Standardized Threat Taxonomy for AI Security, Governance, and Regulatory Compliance
By: Hernan Huwyler
The accelerating deployment of artificial intelligence systems across regulated sectors has exposed critical fragmentation in risk assessment methodologies. A significant "language barrier" currently separates technical security teams, who focus on algorithmic vulnerabilities (e.g., MITRE ATLAS), from legal and compliance professionals, who address regulatory mandates (e.g., EU AI Act, NIST AI RMF). This disciplinary disconnect prevents the accurate translation of technical vulnerabilities into financial liability, leaving practitioners unable to answer fundamental economic questions regarding contingency reserves, control return-on-investment, and insurance exposure. To bridge this gap, this research presents the AI System Threat Vector Taxonomy, a structured ontology designed explicitly for Quantitative Risk Assessment (QRA). The framework categorizes AI-specific risks into nine critical domains: Misuse, Poisoning, Privacy, Adversarial, Biases, Unreliable Outputs, Drift, Supply Chain, and IP Threat, integrating 53 operationally defined sub-threats. Uniquely, each domain maps technical vectors directly to business loss categories (Confidentiality, Integrity, Availability, Legal, Reputation), enabling the translation of abstract threats into measurable financial impact. The taxonomy is empirically validated through an analysis of 133 documented AI incidents from 2025 (achieving 100% classification coverage) and reconciled against the main AI risk frameworks. Furthermore, it is explicitly aligned with ISO/IEC 42001 controls and NIST AI RMF functions to facilitate auditability.
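To make the domain-to-loss mapping concrete, below is a minimal Python sketch of how the taxonomy could drive a quantitative estimate. The nine domain names and five loss categories come from the abstract; the specific category assignments, frequencies, and loss magnitudes are illustrative placeholders, and the annualized loss expectancy formula (ALE = annual rate of occurrence × single loss expectancy) is a standard QRA point estimate, not necessarily the paper's own method.

```python
# Hypothetical encoding of the taxonomy's domain -> loss-category mapping.
# The nine domains and five loss categories come from the abstract; the
# specific assignments below are illustrative placeholders, not the paper's.
LOSS_CATEGORIES = ("Confidentiality", "Integrity", "Availability", "Legal", "Reputation")

DOMAIN_TO_LOSSES = {
    "Misuse": ("Legal", "Reputation"),
    "Poisoning": ("Integrity",),
    "Privacy": ("Confidentiality", "Legal"),
    "Adversarial": ("Integrity", "Availability"),
    "Biases": ("Legal", "Reputation"),
    "Unreliable Outputs": ("Integrity", "Reputation"),
    "Drift": ("Integrity",),
    "Supply Chain": ("Confidentiality", "Integrity", "Availability"),
    "IP Threat": ("Confidentiality", "Legal"),
}

def annualized_loss_expectancy(events_per_year: float, loss_per_event: float) -> float:
    """Standard QRA point estimate: ALE = ARO x SLE."""
    return events_per_year * loss_per_event

# Example: translate a classified "Poisoning" incident into a money figure
# (0.4 events/year and $250k per event are invented inputs for illustration).
domain = "Poisoning"
ale = annualized_loss_expectancy(events_per_year=0.4, loss_per_event=250_000)
print(f"{domain} affects {', '.join(DOMAIN_TO_LOSSES[domain])}; ALE ~ ${ale:,.0f}/year")
```

In practice, the paper's 53 operationally defined sub-threats would refine this coarse mapping, and a full QRA would replace the point estimate with distributions over frequency and magnitude to support the contingency-reserve, control-ROI, and insurance questions the abstract raises.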