Abstract: Adversarial machine learning (AML), by designing attacks that intentionally break or misuse state-of-the-art machine learning models, has become the most prominent scientific field exploring the security aspects of Artificial Intelligence. These studies have exposed a whole range of vulnerabilities that were previously irrelevant in traditional ICT. In light of upcoming legislation mandating security requirements for AI products and services, there is a need to understand how AML techniques connect with the broader field of cybersecurity, and how to articulate threat models more tightly with realistic cybersecurity procedures. This article aims to contribute to closing the gap between AML and cybersecurity by proposing an approach to study the feasibility of an attack within a cybersecurity risk assessment framework, illustrated with a specific use case: an evasion attack designed to fool traffic sign recognition systems in the physical world. The importance of assessing the feasibility of carrying out such attacks under real conditions is emphasized through the analysis of two factors: the reproducibility of the attack from a published description or existing code, and its applicability by a malicious actor operating in a real-world environment.
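To make the attack class concrete, the sketch below shows a minimal digital evasion attack using the fast gradient sign method (FGSM); this is a standard illustrative technique, not the paper's specific physical-world method, and the PyTorch model, input tensor, and epsilon value are assumptions for the example.

```python
# Minimal FGSM evasion sketch (assumption: a differentiable PyTorch
# classifier `model`, a normalized image tensor `image` in [0, 1],
# and its true class index `label`). Illustrative only; the paper's
# physical-world traffic-sign attack is more involved.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, image, label, epsilon=0.03):
    """Return an adversarial example crafted to be misclassified."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss,
    # bounded by epsilon, then keep pixel values in the valid range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

In the physical-world setting the article considers, such digital perturbations must additionally survive printing, viewing angle, distance, and lighting variations, which is precisely what the reproducibility and applicability analysis weighs.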