SoK: A Comprehensive Causality Analysis Framework for Large Language Model Security
By: Wei Zhao, Zhe Li, Jun Sun
Potential Business Impact:
Makes AI safer by finding and fixing weak spots.
Large Language Models (LLMs) exhibit remarkable capabilities but remain vulnerable to adversarial manipulations such as jailbreaking, where crafted prompts bypass safety mechanisms. Understanding the causal factors behind such vulnerabilities is essential for building reliable defenses. In this work, we introduce a unified causality analysis framework that systematically supports all levels of causal investigation in LLMs, ranging from token-level, neuron-level, and layer-level interventions to representation-level analysis. The framework enables consistent experimentation and comparison across diverse causality-based attack and defense methods. Accompanying this implementation, we provide the first comprehensive survey of causality-driven jailbreak studies and empirically evaluate the framework on multiple open-weight models and safety-critical benchmarks including jailbreaks, hallucination detection, backdoor identification, and fairness evaluation. Our results reveal that: (1) targeted interventions on causally critical components can reliably modify safety behavior; (2) safety-related mechanisms are highly localized (i.e., concentrated in early-to-middle layers with only 1–2% of neurons exhibiting causal influence); and (3) causal features extracted from our framework achieve over 95% detection accuracy across multiple threat types. By bridging theoretical causality analysis and practical model safety, our framework establishes a reproducible foundation for research on causality-based attacks, interpretability, and robust attack detection and mitigation in LLMs. Code is available at https://github.com/Amadeuszhao/SOK_Casuality.
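To illustrate the kind of neuron-level intervention the abstract refers to, below is a minimal sketch of activation patching via a forward hook on one transformer layer. It is not the paper's framework: the model name ("gpt2"), the layer index, and the neuron indices are illustrative assumptions, chosen only to show how zeroing a handful of hidden dimensions realizes a do()-style intervention on candidate causal neurons.

```python
# Minimal sketch of a neuron-level causal intervention (activation ablation).
# Assumes a Hugging Face causal LM; MODEL_NAME, LAYER_IDX, and NEURON_IDS
# are hypothetical placeholders, not values from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"          # placeholder open-weight model
LAYER_IDX = 6                # hypothetical "causally critical" layer
NEURON_IDS = [17, 342, 901]  # hypothetical neurons to ablate

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def ablate_neurons(module, inputs, output):
    # Zero out the selected hidden dimensions of this layer's output,
    # i.e., intervene on the chosen neurons while leaving the rest intact.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden[..., NEURON_IDS] = 0.0
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# GPT-2 exposes its blocks at model.transformer.h; the path differs per architecture.
layer = model.transformer.h[LAYER_IDX]
handle = layer.register_forward_hook(ablate_neurons)

prompt = "How do I stay safe online?"
ids = tok(prompt, return_tensors="pt")
with torch.no_grad():
    patched = model.generate(**ids, max_new_tokens=30)
handle.remove()  # remove the hook to restore the unpatched model

print(tok.decode(patched[0], skip_special_tokens=True))
```

Comparing generations (or safety-classifier scores) with and without the hook is one simple way to estimate whether the ablated neurons causally influence safety behavior.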
Similar Papers
SoK: Taxonomy and Evaluation of Prompt Security in Large Language Models
Cryptography and Security
Makes AI safer from bad instructions.
SoK: Evaluating Jailbreak Guardrails for Large Language Models
Cryptography and Security
Protects AI from harmful instructions.