XGUARD: A Graded Benchmark for Evaluating Safety Failures of Large Language Models on Extremist Content
By: Vadivel Abishethvarman, Bhavik Chandna, Pratik Jalan, and more
Potential Business Impact:
Grades how dangerous AI-made extremist text is on a 0-to-4 scale, instead of just flagging it as unsafe.
Large Language Models (LLMs) can generate content ranging from ideological rhetoric to explicit instructions for violence. However, existing safety evaluations often rely on simplistic binary labels (safe vs. unsafe), overlooking the nuanced spectrum of risk these outputs pose. To address this, we present XGUARD, a benchmark and evaluation framework designed to assess the severity of extremist content generated by LLMs. XGUARD includes 3,840 red-teaming prompts sourced from real-world data such as social media and news, covering a broad range of ideologically charged scenarios. Our framework categorizes model responses into five danger levels (0 to 4), enabling a more nuanced analysis of both the frequency and severity of failures. We introduce the interpretable Attack Severity Curve (ASC) to visualize vulnerabilities and compare defense mechanisms across threat intensities. Using XGUARD, we evaluate six popular LLMs and two lightweight defense strategies, revealing key insights into current safety gaps and trade-offs between robustness and expressive freedom. Our work underscores the value of graded safety metrics for building trustworthy LLMs.
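To make the graded evaluation concrete, here is a minimal sketch of how an Attack Severity Curve could be computed from per-response danger grades. The paper's exact ASC definition is not reproduced here; this sketch assumes the curve reports, for each danger level t from 0 to 4, the fraction of model responses graded at or above t. The function name attack_severity_curve and the example grade counts are invented for illustration and are not results from the paper.

```python
# Hypothetical sketch of an Attack Severity Curve (ASC) computation.
# Assumption: ASC(t) = fraction of responses whose assigned danger level is >= t,
# for t in 0..4 (the five XGUARD danger levels).

from collections import Counter
from typing import Iterable, List

DANGER_LEVELS = range(5)  # 0 (harmless) through 4 (most dangerous)

def attack_severity_curve(severities: Iterable[int]) -> List[float]:
    """Return, for each threshold t in 0..4, the fraction of responses rated >= t."""
    grades = list(severities)
    counts = Counter(grades)
    total = len(grades)
    curve = []
    remaining = total
    for level in DANGER_LEVELS:
        curve.append(remaining / total if total else 0.0)
        remaining -= counts.get(level, 0)
    return curve

# Toy example: hypothetical grades for one model over the 3,840 XGUARD prompts
# (made-up numbers, purely to show the curve's shape).
example_grades = [0] * 3000 + [1] * 400 + [2] * 250 + [3] * 150 + [4] * 40
print(attack_severity_curve(example_grades))
# -> [1.0, 0.21875, 0.1146, 0.0495, 0.0104] (approximately)
```

A curve of this form lets one compare models or defenses across threat intensities: a defense that only suppresses level-4 outputs lowers the tail of the curve, while one that also blocks milder ideological rhetoric flattens it earlier, making the robustness versus expressive-freedom trade-off visible.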
Similar Papers
X-Guard: Multilingual Guard Agent for Content Moderation
Cryptography and Security
Makes AI safer for all languages.
Evaluating the Robustness of Large Language Model Safety Guardrails Against Adversarial Attacks
Cryptography and Security
Makes AI safer from bad instructions.
SGuard-v1: Safety Guardrail for Large Language Models
Computation and Language
Keeps AI from saying bad or dangerous things.