RoboSafe: Safeguarding Embodied Agents via Executable Safety Logic
By: Le Wang, Zonghao Ying, Xiao Yang, and more
Potential Business Impact:
Keeps robots from doing dangerous things.
Embodied agents powered by vision-language models (VLMs) are increasingly capable of executing complex real-world tasks, yet they remain vulnerable to hazardous instructions that can trigger unsafe behaviors. Runtime safety guardrails, which intercept hazardous actions during task execution, offer a promising solution because of their flexibility. However, existing defenses often rely on static rule filters or prompt-level control, which struggle to address the implicit risks that arise in dynamic, temporally dependent, context-rich environments. To address this, we propose RoboSafe, a hybrid-reasoning runtime safeguard that protects embodied agents through executable, predicate-based safety logic. RoboSafe integrates two complementary reasoning processes built on a Hybrid Long-Short Safety Memory. We first propose a Backward Reflective Reasoning module that continuously revisits recent trajectories in short-term memory to infer temporal safety predicates and proactively triggers replanning when violations are detected. We then propose a Forward Predictive Reasoning module that anticipates upcoming risks by generating context-aware safety predicates from the long-term safety memory and the agent's multimodal observations. Together, these components form an adaptive, verifiable safety logic that is both interpretable and executable as code. Extensive experiments across multiple agents demonstrate that RoboSafe substantially reduces hazardous actions (a 36.8% reduction in risk occurrence) compared with leading baselines, while maintaining near-original task performance. Real-world evaluations on physical robotic arms further confirm its practicality. Code will be released upon acceptance.
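The abstract describes safety logic that is "interpretable and executable as code": temporal predicates checked backward over recent trajectories, and context-aware predicates generated forward from long-term memory. Since the paper's code is not yet released, the sketch below is only a rough illustration of what such runtime predicate checks over a hybrid memory might look like; the class names, the example predicate, and the guard loop are our own assumptions, not RoboSafe's actual API.

```python
# Illustrative sketch of executable predicate-based safety checks.
# All names here (HybridSafetyMemory, no_liquid_near_electronics, the
# check functions) are hypothetical stand-ins, not the paper's code.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# A safety predicate is an executable check over the agent's recent
# trajectory and current observation; it returns True when safe.
Predicate = Callable[[List[Dict], Dict], bool]

@dataclass
class HybridSafetyMemory:
    """Stand-in for the Hybrid Long-Short Safety Memory."""
    short_term: List[Dict] = field(default_factory=list)      # recent steps
    long_term: List[Predicate] = field(default_factory=list)  # accumulated rules

    def record(self, step: Dict, window: int = 8) -> None:
        # Keep only a sliding window of recent trajectory steps.
        self.short_term.append(step)
        self.short_term = self.short_term[-window:]

def no_liquid_near_electronics(trajectory: List[Dict], obs: Dict) -> bool:
    """Example temporal predicate: never carry liquid over electronics."""
    return not (obs.get("holding") == "liquid"
                and "electronics" in obs.get("surfaces_below", []))

def backward_reflective_check(memory: HybridSafetyMemory, obs: Dict,
                              predicates: List[Predicate]) -> bool:
    """Revisit the short-term trajectory; True means replanning is needed."""
    return any(not p(memory.short_term, obs) for p in predicates)

def forward_predictive_check(memory: HybridSafetyMemory, obs: Dict) -> List[Predicate]:
    """Select long-term predicates relevant to the upcoming context.
    (Context-aware filtering and predicate generation are omitted here.)"""
    return list(memory.long_term)

if __name__ == "__main__":
    memory = HybridSafetyMemory(long_term=[no_liquid_near_electronics])
    obs = {"holding": "liquid", "surfaces_below": ["electronics"]}
    memory.record({"action": "move_arm", "obs": obs})
    active = forward_predictive_check(memory, obs)
    if backward_reflective_check(memory, obs, active):
        print("Violation detected: trigger replanning")
```

The point of the sketch is the division of labor the abstract names: forward reasoning decides which predicates apply to the current context, while backward reasoning executes them against the recent trajectory and gates replanning on any violation.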
Similar Papers
AGENTSAFE: Benchmarking the Safety of Embodied Agents on Hazardous Instructions
Cryptography and Security
Tests whether robots follow safe rules instead of dangerous ones.
Safety Guardrails for LLM-Enabled Robots
Robotics
Keeps robots safe from bad commands.
RSafe: Incentivizing proactive reasoning to build robust and adaptive LLM safeguards
Artificial Intelligence
Keeps AI from saying bad things.