Score: 0

From Rogue to Safe AI: The Role of Explicit Refusals in Aligning LLMs with International Humanitarian Law

Published: June 5, 2025 | arXiv ID: 2506.06391v1

By: John Mavi, Diana Teodora Găitan, Sergio Coronado

Potential Business Impact:

AI learns to refuse illegal or harmful requests.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are widely used across sectors, yet their alignment with International Humanitarian Law (IHL) is not well understood. This study evaluates eight leading LLMs on their ability to refuse prompts that explicitly violate these legal frameworks, focusing also on helpfulness - how clearly and constructively refusals are communicated. While most models rejected unlawful requests, the clarity and consistency of their responses varied. By revealing the model's rationale and referencing relevant legal or safety principles, explanatory refusals clarify the system's boundaries, reduce ambiguity, and help prevent misuse. A standardised system-level safety prompt significantly improved the quality of the explanations expressed within refusals in most models, highlighting the effectiveness of lightweight interventions. However, more complex prompts involving technical language or requests for code revealed ongoing vulnerabilities. These findings contribute to the development of safer, more transparent AI systems and propose a benchmark to evaluate the compliance of LLM with IHL.

Page Count
15 pages

Category
Computer Science:
Computers and Society