Abstract: Object detection is a fundamental component of numerous real-world applications, ranging from autonomous vehicles to security systems. As these technologies become increasingly embedded in daily life, ensuring the security and resilience of object detection models is critically important. However, these models are vulnerable to adversarial attacks, in which subtle alterations intentionally introduced into input data can mislead the model’s predictions. This paper examines the susceptibility of YOLO V8 and TensorFlow Object Detection models, such as MobileNet and ResNet, to adversarial attacks through the lens of attack transferability. Using the Fast Gradient Sign Method (FGSM) with a distinct classifier model, we generate adversarial examples and evaluate their impact on object detection systems. Our analysis reveals a significant decline in the object detection models’ performance in the presence of these adversarial examples, demonstrating that attacks transfer across different models. Our findings emphasize the critical need for robust defenses to safeguard object detection systems against transferable adversarial attacks.
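As a minimal sketch of the kind of transfer setup the abstract describes, the snippet below generates an FGSM perturbation from a surrogate classifier and returns an image that could then be passed to a separate object detector. It assumes a tf.keras classifier producing class probabilities and images scaled to [0, 1]; the function name `fgsm_perturb` and the epsilon value are illustrative, not taken from the paper.

```python
import tensorflow as tf

def fgsm_perturb(classifier, image, label, epsilon=0.01):
    """Generate an FGSM adversarial example from a surrogate classifier.

    `classifier` is assumed to be a tf.keras model (e.g., MobileNet) that
    outputs class probabilities; `image` is a batched float tensor in [0, 1];
    `label` is the integer class index. The perturbed image is intended to be
    fed to a different object detection model to probe attack transferability.
    """
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = classifier(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
    # FGSM: step in the direction of the sign of the input gradient of the loss.
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)
```

The perturbation budget epsilon controls the trade-off between imperceptibility and attack strength; the abstract does not specify the value used, so any concrete choice here is an assumption.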