LoRA as a Flexible Framework for Securing Large Vision Systems

Published: May 31, 2025 | arXiv ID: 2506.00661v2

By: Zander W. Blasingame, Richard E. Neddo, Chen Liu

Potential Business Impact:

Fixes self-driving cars fooled by fake signs.

Business Areas:
Autonomous Vehicles, Transportation

Adversarial attacks have emerged as a critical threat to autonomous driving systems. These attacks exploit the underlying neural network, allowing small -- nearly invisible -- perturbations to completely alter the behavior of such systems in potentially malicious ways, e.g., causing a traffic sign classification network to misclassify a stop sign as a speed limit sign. Prior work on hardening such systems against adversarial attacks has looked at robust training of the system or adding additional pre-processing steps to the input pipeline. Such solutions either have a hard time generalizing, require knowledge of the adversarial attacks during training, or are computationally undesirable. Instead, we propose to take insights from parameter-efficient fine-tuning and use low-rank adaptation (LoRA) to train a lightweight security patch -- enabling us to dynamically patch a large preexisting vision system as new vulnerabilities are discovered. We demonstrate that our framework can patch a pre-trained model to improve classification accuracy by up to 78.01% in the presence of adversarial examples.
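The core mechanism behind the "security patch" idea is standard LoRA: the pre-trained weight matrix stays frozen, and a trainable low-rank correction is added alongside it. A minimal NumPy sketch of that structure (all dimensions, variable names, and the scaling hyperparameter here are illustrative assumptions, not the paper's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen weight of one layer in the pre-trained vision system (hypothetical sizes).
d_out, d_in, r = 64, 128, 4          # rank r << min(d_out, d_in)
W = rng.standard_normal((d_out, d_in))

# LoRA "patch": only A and B would be trained; W stays untouched.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))             # zero init => patch starts as a no-op
alpha = 8.0                          # LoRA scaling hyperparameter (assumed value)

def patched_forward(x):
    # Original path plus the low-rank correction (alpha / r) * B @ A @ x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the patched model equals the original model.
assert np.allclose(patched_forward(x), W @ x)

# Trainable parameters in the patch vs. the full layer:
print(A.size + B.size, W.size)  # 768 vs 8192
```

Because only `A` and `B` carry gradients, a patch for a newly discovered attack is small enough to train and ship without retraining or redistributing the full vision model.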

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition