EAGER: Edge-Aligned LLM Defense for Robust, Efficient, and Accurate Cybersecurity Question Answering
By: Onat Gungor, Roshan Sood, Jiasheng Zhou, and more
Potential Business Impact:
Makes smart AI work on small devices securely.
Large Language Models (LLMs) are highly effective for cybersecurity question answering (QA) but are difficult to deploy on edge devices due to their size. Quantization reduces memory and compute requirements but often degrades accuracy and increases vulnerability to adversarial attacks. We present EAGER, an edge-aligned defense framework that integrates parameter-efficient quantization with domain-specific preference alignment to jointly optimize efficiency, robustness, and accuracy. Unlike prior methods that address these aspects separately, EAGER leverages Quantized Low-Rank Adaptation (QLoRA) for low-cost fine-tuning and Direct Preference Optimization (DPO) on a self-constructed cybersecurity preference dataset, eliminating the need for human labels. Experiments show that EAGER reduces adversarial attack success rates by up to 7.3x and improves QA accuracy by up to 55% over state-of-the-art defenses, while achieving the lowest response latency on an NVIDIA Jetson Orin, demonstrating its suitability for practical edge deployment.
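The pipeline the abstract describes (a 4-bit QLoRA base model fine-tuned with DPO on prompt/chosen/rejected preference pairs) maps naturally onto the Hugging Face transformers, peft, and trl libraries. The sketch below is illustrative only: the base model name, preference-data file, and hyperparameters are assumptions rather than the paper's actual settings, and trl argument names vary somewhat across releases.

# Minimal sketch of an EAGER-style pipeline: QLoRA (4-bit quantized base
# model plus trainable LoRA adapters) followed by DPO on a cybersecurity
# preference dataset. Model name, dataset file, and hyperparameters are
# placeholders, not the paper's configuration.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Llama-2-7b-hf"  # placeholder; the paper's base model may differ

# 4-bit NF4 quantization (the "Q" in QLoRA) keeps the frozen base model small.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

# LoRA adapters are the only trainable parameters (parameter-efficient fine-tuning).
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# Preference pairs with "prompt"/"chosen"/"rejected" columns, e.g. built by
# pairing correct answers against degraded or attacked ones (hypothetical file,
# standing in for the paper's self-constructed dataset).
prefs = load_dataset("json", data_files="cyber_prefs.jsonl", split="train")

trainer = DPOTrainer(
    model,
    args=DPOConfig(output_dir="eager-dpo", beta=0.1, per_device_train_batch_size=2),
    train_dataset=prefs,
    processing_class=tokenizer,  # named `tokenizer=` in older trl releases
    peft_config=lora,  # trl wraps the quantized model with the LoRA adapters
)
trainer.train()

Because only the LoRA adapters train while the 4-bit base stays frozen, the same quantized weights used for fitting also serve at inference, which is what lets the aligned model stay within edge-device memory budgets.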
Similar Papers
AQUA-LLM: Evaluating Accuracy, Quantization, and Adversarial Robustness Trade-offs in LLMs for Cybersecurity Question Answering
Cryptography and Security
Makes smart computer security programs smaller, faster, safer.
ELUTQ: Efficient LUT-Aware Quantization for Deploying Large Language Models on Edge Devices
Machine Learning (CS)
Makes smart AI run on phones, faster and smaller.
UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs
Machine Learning (CS)
Makes smart phone AI run much faster and smaller.