Evaluating LLM Agent Adherence to Hierarchical Safety Principles: A Lightweight Benchmark for Probing Foundational Controllability Components
By: Ram Potham
Potential Business Impact:
Tests if AI follows safety rules when told to do other things.
Credible safety plans for advanced AI development require methods to verify agent behavior and detect potential control deficiencies early. A fundamental aspect is ensuring agents adhere to safety-critical principles, especially when these conflict with operational goals. This paper introduces a lightweight, interpretable benchmark to evaluate an LLM agent's ability to uphold a high-level safety principle when faced with conflicting task instructions. Our evaluation of six LLMs reveals two primary findings: (1) a quantifiable "cost of compliance" where safety constraints degrade task performance even when compliant solutions exist, and (2) an "illusion of compliance" where high adherence often masks task incompetence rather than principled choice. These findings provide initial evidence that while LLMs can be influenced by hierarchical directives, current approaches lack the consistency required for reliable safety governance.
Similar Papers
How to evaluate control measures for LLM agents? A trajectory from today to superintelligence
Artificial Intelligence
Tests AI to stop it from doing bad things.
A Framework for Benchmarking and Aligning Task-Planning Safety in LLM-Based Embodied Agents
Artificial Intelligence
Makes robots safer by teaching them risks.
Regulating the Agency of LLM-based Agents
Computers and Society
Controls AI to prevent it from causing harm.