Detecting Ambiguity Aversion in Cyberattack Behavior to Inform Cognitive Defense Strategies
By: Stephan Carney, Soham Hans, Sofia Hirschmann, and more
Adversaries (hackers) attempting to infiltrate networks frequently face uncertainty in their operational environments. This research explores whether it is possible to model and detect when attackers exhibit ambiguity aversion, a cognitive bias reflecting a preference for known over unknown probabilities. We introduce a novel methodological framework that (1) leverages rich, multi-modal data from human-subjects red-team experiments, (2) employs a large language model (LLM) pipeline to parse unstructured logs into MITRE ATT&CK-mapped action sequences, and (3) applies a new computational model to infer an attacker's ambiguity aversion level in near-real time. By operationalizing this cognitive trait, our work provides a foundational component for developing adaptive cognitive defense strategies.
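The abstract does not describe the authors' implementation, so the sketch below is only one plausible illustration of the inference step (3). It assumes each ATT&CK-mapped action has already been reduced to a binary choice between an option with a known success probability and an option whose probability is only bounded, and it fits a linear ambiguity-penalty parameter by maximum likelihood under a softmax choice rule. The names (Choice, estimate_ambiguity_aversion) and the penalty model are illustrative assumptions, not the paper's actual method.

```python
"""Minimal sketch (not the authors' released code): infer an ambiguity-aversion
parameter from a sequence of attacker choices.

Assumed model (hypothetical, for illustration only):
- the 'known' option has a stated success probability p_known;
- the 'ambiguous' option's probability is only bounded in [amb_lo, amb_hi];
- its subjective value is the interval midpoint minus lam * interval width,
  so lam > 0 corresponds to ambiguity aversion;
- observed choices follow a softmax rule over these subjective values.
"""
import math
from dataclasses import dataclass
from scipy.optimize import minimize_scalar


@dataclass
class Choice:
    p_known: float         # stated probability of the unambiguous option
    amb_lo: float          # lower bound on the ambiguous option's probability
    amb_hi: float          # upper bound on the ambiguous option's probability
    chose_ambiguous: bool  # observed attacker decision


def neg_log_likelihood(lam: float, choices: list[Choice], temperature: float = 0.1) -> float:
    """Negative log-likelihood of the observed choices given ambiguity penalty lam."""
    nll = 0.0
    for c in choices:
        v_known = c.p_known
        # subjective value of the ambiguous option: midpoint minus lam * width
        v_amb = (c.amb_lo + c.amb_hi) / 2 - lam * (c.amb_hi - c.amb_lo)
        # softmax probability of picking the ambiguous option
        p_amb = 1.0 / (1.0 + math.exp(-(v_amb - v_known) / temperature))
        p_obs = p_amb if c.chose_ambiguous else 1.0 - p_amb
        nll -= math.log(max(p_obs, 1e-12))
    return nll


def estimate_ambiguity_aversion(choices: list[Choice]) -> float:
    """Maximum-likelihood estimate of lam; positive values indicate ambiguity aversion."""
    result = minimize_scalar(
        neg_log_likelihood, bounds=(-1.0, 1.0), args=(choices,), method="bounded"
    )
    return result.x


if __name__ == "__main__":
    # toy data: the attacker mostly avoids the ambiguous path
    demo = [
        Choice(0.50, 0.3, 0.7, False),
        Choice(0.40, 0.2, 0.8, False),
        Choice(0.55, 0.4, 0.7, True),
    ]
    print(f"estimated lambda: {estimate_ambiguity_aversion(demo):.2f}")
```

In this framing, updating the estimate after each newly parsed action is what would make the inference usable in near-real time as the abstract describes.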
Similar Papers
Quantifying Loss Aversion in Cyber Adversaries via LLM Analysis
Cryptography and Security
Helps computers spot hacker fears to stop attacks.
Evidence of Cognitive Biases in Capture-the-Flag Cybersecurity Competitions
Cryptography and Security
Helps computers learn how hackers think.
Guarding Against Malicious Biased Threats (GAMBiT): Experimental Design of Cognitive Sensors and Triggers with Behavioral Impact Analysis
Cryptography and Security
Tricks hackers' minds to stop cyberattacks.