LoRA as a Flexible Framework for Securing Large Vision Systems
Keywords: adversarial robustness, ViT, autonomous driving, LoRA, security
TL;DR: We propose to use LoRA as a "security patch" for vision pipelines in autonomous driving systems.
Abstract: Adversarial attacks have emerged as a critical threat to autonomous driving systems.
These attacks exploit the underlying neural network, allowing small, nearly invisible perturbations to completely alter the behavior of such systems in potentially malicious ways,
e.g., causing a traffic sign classification network to misclassify a stop sign as a speed limit sign.
Prior work on hardening such systems against adversarial attacks has focused on robust training of the system or on adding pre-processing steps to the input pipeline.
Such solutions either generalize poorly, require knowledge of the adversarial attacks during training, or are computationally expensive.
Instead, we propose to take insights from parameter-efficient fine-tuning and use low-rank adaptation (LoRA) to train a lightweight security patch, enabling us to dynamically patch a large pre-existing vision system as new vulnerabilities are discovered.
We demonstrate that our framework can patch a pre-trained model to improve classification accuracy by up to 24.09% in the presence of adversarial examples.
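The core mechanism behind such a patch can be sketched in a few lines. The snippet below is an illustrative, minimal sketch of the LoRA idea (all names, dimensions, and the scaling factor are assumptions for illustration, not the authors' code): a frozen pre-trained weight matrix W is augmented with a trainable low-rank update B @ A, so the patch adds only r * (d_in + d_out) parameters instead of retraining all d_in * d_out weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4              # illustrative sizes; r << min(d_in, d_out)

W = rng.standard_normal((d_out, d_in))   # frozen pre-trained layer weight
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection (zero-initialized)
alpha = 1.0                              # illustrative scaling factor

def forward(x, patched=True):
    """Layer output; the security patch only adds the low-rank term."""
    y = W @ x
    if patched:
        y = y + alpha * (B @ (A @ x))
    return y

x = rng.standard_normal(d_in)
# With B zero-initialized, the patch is a no-op until trained,
# so the original model's behavior is preserved at deployment time:
assert np.allclose(forward(x, patched=True), forward(x, patched=False))
```

Because only A and B are updated, a newly trained patch can be shipped and applied to the deployed model without touching, or redistributing, the original weights.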
Submission Number: 310