Evasive Ransomware Attacks Using Low-level Behavioral Adversarial Examples
By: Manabu Hirano, Ryotaro Kobayashi
Potential Business Impact:
Makes malware hide from computer defenses.
Protecting state-of-the-art AI-based cybersecurity defense systems from cyber attacks is crucial. Attackers create adversarial examples by adding small changes (i.e., perturbations) to attack features in order to evade or fool a deep learning model. This paper introduces the concept of low-level behavioral adversarial examples and the corresponding threat model for evasive ransomware. We formulate the method and the threat model for generating the optimal source code of evasive malware. We then examine the method using the leaked source code of the Conti ransomware, extended with a micro-behavior control function. The micro-behavior control function is our test component that simulates source-code changes in ransomware: the ransomware's behavior can be altered at boot time by specifying the number of threads, the file-encryption ratio, and the delay after file encryption. We evaluated how much an attacker can control the behavioral features of ransomware through the micro-behavior control function to decrease the detection rate of a ransomware detector.
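The general adversarial-example idea the abstract refers to (a small perturbation of input features that flips a model's decision) can be sketched on a toy detector. This is a minimal, illustrative gradient-sign (FGSM-style) evasion against a hand-built linear classifier, not the paper's low-level behavioral method; all weights, features, and the budget `eps` are invented for illustration.

```python
import numpy as np

# Toy linear "detector": score = sigmoid(w . x + b); score > 0.5 => flagged as malicious.
# Weights and bias are arbitrary illustrative values, not from the paper.
w = np.array([1.2, -0.8, 2.0])
b = -0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detect(x):
    """Return True if the toy detector classifies feature vector x as malicious."""
    return sigmoid(w @ x + b) > 0.5

# A feature vector the detector currently flags as malicious.
x = np.array([1.0, 0.2, 0.9])

# FGSM-style evasion: step against the gradient of the malicious score.
# d(score)/dx = sigmoid'(z) * w, whose sign is sign(w), so a sign-based
# perturbation with budget eps is x_adv = x - eps * sign(w).
eps = 0.6
x_adv = x - eps * np.sign(w)

print(detect(x), detect(x_adv))  # the perturbed sample evades the toy detector
```

In the paper's setting the "perturbation" is realized differently: instead of editing a feature vector directly, the attacker changes source-code-level parameters (thread count, encryption ratio, post-encryption delay) so that the resulting low-level behavioral features shift the detector's decision.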
Similar Papers
Towards Low-Latency and Adaptive Ransomware Detection Using Contrastive Learning
Cryptography and Security
Finds new computer viruses faster and better.
A Practical Adversarial Attack against Sequence-based Deep Learning Malware Classifiers
Cryptography and Security
Makes malware trick security programs better.
Effectiveness of Adversarial Benign and Malware Examples in Evasion and Poisoning Attacks
Cryptography and Security
Tricks antivirus into flagging good files as bad.