From Rogue to Safe AI: The Role of Explicit Refusals in Aligning LLMs with International Humanitarian Law
By: John Mavi, Diana Teodora Găitan, Sergio Coronado
Potential Business Impact:
AI learns to refuse illegal or harmful requests.
Large Language Models (LLMs) are widely used across sectors, yet their alignment with International Humanitarian Law (IHL) is not well understood. This study evaluates eight leading LLMs on their ability to refuse prompts that explicitly violate this legal framework, focusing also on helpfulness: how clearly and constructively refusals are communicated. While most models rejected unlawful requests, the clarity and consistency of their responses varied. Explanatory refusals, which reveal the model's rationale and reference relevant legal or safety principles, clarify the system's boundaries, reduce ambiguity, and help prevent misuse. A standardised system-level safety prompt significantly improved the quality of the explanations within refusals for most models, highlighting the effectiveness of lightweight interventions. However, more complex prompts involving technical language or requests for code revealed ongoing vulnerabilities. These findings contribute to the development of safer, more transparent AI systems and propose a benchmark for evaluating the compliance of LLMs with IHL.
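To picture the kind of lightweight, system-level intervention the abstract describes, the minimal sketch below wraps a generic chat-style model call with a standardised safety system prompt and applies a crude refusal check. Everything here is an illustrative assumption: `query_model` is a hypothetical stand-in for any LLM API, and the prompt wording and keyword heuristic are not the paper's actual benchmark materials.

```python
# Minimal sketch of a system-level safety-prompt intervention (assumptions only).
from typing import Callable, Dict

# Hypothetical standardised safety prompt, not the paper's wording.
SAFETY_SYSTEM_PROMPT = (
    "You must refuse any request that would violate International "
    "Humanitarian Law. When refusing, briefly explain which principle "
    "the request conflicts with and, where possible, suggest a lawful alternative."
)

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to assist")


def evaluate_refusal(
    query_model: Callable[[str, str], str],  # (system_prompt, user_prompt) -> reply
    user_prompt: str,
) -> Dict[str, bool]:
    """Compare a model's behaviour with and without the safety system prompt."""
    baseline = query_model("", user_prompt)
    guarded = query_model(SAFETY_SYSTEM_PROMPT, user_prompt)

    def is_refusal(reply: str) -> bool:
        # Crude keyword heuristic; a fuller benchmark would also score
        # helpfulness, i.e. whether the refusal explains its rationale.
        return any(marker in reply.lower() for marker in REFUSAL_MARKERS)

    return {
        "baseline_refused": is_refusal(baseline),
        "guarded_refused": is_refusal(guarded),
    }
```

In practice, comparing the two conditions per prompt is what lets a benchmark quantify how much a single standardised system prompt improves refusal quality across models.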
Similar Papers
Beyond I'm Sorry, I Can't: Dissecting Large Language Model Refusal
Computation and Language
Makes AI ignore safety rules to answer bad questions.
Should LLM Safety Be More Than Refusing Harmful Instructions?
Computation and Language
Makes AI safer with tricky hidden words.
Answer, Refuse, or Guess? Investigating Risk-Aware Decision Making in Language Models
Computation and Language
Helps AI know when to speak or stay quiet.