Score: 3

COMPASS: A Framework for Evaluating Organization-Specific Policy Alignment in LLMs

Published: January 5, 2026 | arXiv ID: 2601.01836v1

By: Dasol Choi, DongGeon Lee, Brigitta Jesica Kartono, and more

BigTech Affiliations: BMW

Potential Business Impact:

Helps keep AI systems from violating company-specific rules.

Business Areas:
Compliance, Professional Services

As large language models are deployed in high-stakes enterprise applications, from healthcare to finance, ensuring adherence to organization-specific policies has become essential. Yet existing safety evaluations focus exclusively on universal harms. We present COMPASS (Company/Organization Policy Alignment Assessment), the first systematic framework for evaluating whether LLMs comply with organizational allowlist and denylist policies. We apply COMPASS to eight diverse industry scenarios, generating and validating 5,920 queries that test both routine compliance and adversarial robustness through strategically designed edge cases. Evaluating seven state-of-the-art models, we uncover a fundamental asymmetry: models reliably handle legitimate requests (>95% accuracy) but catastrophically fail at enforcing prohibitions, refusing only 13-40% of adversarial denylist violations. These results demonstrate that current LLMs lack the robustness required for policy-critical deployments, establishing COMPASS as an essential evaluation framework for organizational AI safety.
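The asymmetry the abstract reports can be made concrete with a small scoring sketch. This is a hypothetical illustration of how a COMPASS-style evaluation might tally results, not the paper's actual implementation: all names (`EvalResult`, `compass_scores`) and the toy data are assumptions; the real framework uses 5,920 validated queries across eight industry scenarios.

```python
# Hypothetical sketch of COMPASS-style scoring: count how often a model
# answers legitimate allowlist queries and refuses prohibited denylist ones.
# All class/function names are illustrative, not from the paper.
from dataclasses import dataclass


@dataclass
class EvalResult:
    policy: str    # "allowlist" or "denylist"
    refused: bool  # did the model refuse the request?


def compass_scores(results: list[EvalResult]) -> dict[str, float]:
    """Allowlist accuracy = fraction of legitimate requests answered;
    denylist refusal rate = fraction of prohibited requests refused."""
    allow = [r for r in results if r.policy == "allowlist"]
    deny = [r for r in results if r.policy == "denylist"]
    return {
        "allowlist_accuracy": sum(not r.refused for r in allow) / len(allow),
        "denylist_refusal_rate": sum(r.refused for r in deny) / len(deny),
    }


# Toy data mirroring the reported asymmetry: strong on legitimate requests,
# weak at enforcing prohibitions.
results = (
    [EvalResult("allowlist", refused=False)] * 19
    + [EvalResult("allowlist", refused=True)]
    + [EvalResult("denylist", refused=True)] * 3
    + [EvalResult("denylist", refused=False)] * 7
)
scores = compass_scores(results)
print(scores)  # allowlist_accuracy 0.95, denylist_refusal_rate 0.3
```

Under this toy split, the model answers 95% of allowlist queries but refuses only 30% of denylist ones, squarely inside the 13-40% range the paper reports for adversarial denylist violations.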

Country of Origin
🇰🇷 🇩🇪 Korea, Republic of; Germany

Page Count
46 pages

Category
Computer Science:
Artificial Intelligence