HogVul: Black-box Adversarial Code Generation Framework Against LM-based Vulnerability Detectors
By: Jingxiao Yang, Ping He, Tianyu Du, and more
Potential Business Impact:
Crafts sneaky code changes that slip hidden bugs past AI bug-finders.
Recent advances in software vulnerability detection have been driven by Language Model (LM)-based approaches. However, these models remain vulnerable to adversarial attacks that exploit lexical and syntactic perturbations, allowing critical flaws to evade detection. Existing black-box attacks on LM-based vulnerability detectors rely primarily on isolated perturbation strategies, limiting their ability to efficiently explore the adversarial code space for optimal perturbations. To bridge this gap, we propose HogVul, a black-box adversarial code generation framework that integrates both lexical and syntactic perturbations under a unified dual-channel optimization strategy driven by Particle Swarm Optimization (PSO). By systematically coordinating the two levels of perturbation, HogVul effectively expands the search space for adversarial examples, enhancing attack efficacy. Extensive experiments on four benchmark datasets show that HogVul improves the attack success rate by an average of 26.05% over state-of-the-art baselines. These findings highlight the potential of hybrid optimization strategies for exposing model vulnerabilities.
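To make the dual-channel idea concrete, here is a minimal sketch of a PSO-style search over perturbation choices against a black-box detector. Everything here is illustrative, not HogVul's actual method: `detector_score` is a toy stand-in for a real LM-based detector, the counts of lexical and syntax perturbations are assumed, and the binary-PSO update (sigmoid of velocity as a flip probability) is one common discrete-PSO variant.

```python
import math
import random

# Hypothetical sketch: each particle is a binary vector of perturbation
# choices. The first half selects lexical edits (e.g. identifier renames);
# the second half selects syntax-level rewrites (e.g. equivalent loop forms).
N_LEXICAL = 4   # number of candidate lexical perturbations (assumed)
N_SYNTAX = 4    # number of candidate syntax perturbations (assumed)
DIM = N_LEXICAL + N_SYNTAX

def detector_score(choices):
    """Toy stand-in for a black-box detector: returns a 'vulnerable'
    probability. Here, applying more (and mixed-channel) perturbations
    lowers the score, so the attacker wants to minimize it."""
    lex = sum(choices[:N_LEXICAL])
    syn = sum(choices[N_LEXICAL:])
    return max(0.0, 0.9 - 0.08 * lex - 0.10 * syn - 0.05 * min(lex, syn))

def pso_attack(n_particles=8, iters=30, seed=0):
    rng = random.Random(seed)
    # Random binary positions and continuous velocities per particle.
    pos = [[rng.randint(0, 1) for _ in range(DIM)] for _ in range(n_particles)]
    vel = [[rng.uniform(-1, 1) for _ in range(DIM)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [detector_score(p) for p in pos]  # lower = stronger attack
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(DIM):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                # Binary PSO: sigmoid of velocity gives a flip probability.
                v = max(-10.0, min(10.0, vel[i][d]))
                pos[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-v)) else 0
            f = detector_score(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, score = pso_attack()
print("best perturbation mask:", best, "detector score:", round(score, 3))
```

The design point this illustrates is the one the abstract makes: because the swarm searches both channels jointly, it can discover combinations (rewarded here by the `min(lex, syn)` interaction term) that single-channel search would miss.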
Similar Papers
Crafting Adversarial Inputs for Large Vision-Language Models Using Black-Box Optimization
Cryptography and Security
Breaks AI's safety rules without seeing inside.
ParaVul: A Parallel Large Language Model and Retrieval-Augmented Framework for Smart Contract Vulnerability Detection
Cryptography and Security
Finds hidden bugs in online money contracts.