AGNES: Abstraction-Guided Framework for Deep Neural Networks Security

Akshay Dhonthi, Marcello Eiermann, Ernst Moritz Hahn, Vahid Hashemi

Published: 2024, VMCAI (2) 2024. Last Modified: 25 Mar 2026. License: CC BY-SA 4.0
Abstract: Deep Neural Networks (DNNs) are becoming widespread, particularly in safety-critical areas. One prominent application is image recognition in autonomous driving, where the correct classification of objects, such as traffic signs, is essential for safe driving. Unfortunately, DNNs are prone to backdoors, meaning that they concentrate on attributes of the image that should be irrelevant to their correct classification. Backdoors are integrated into a DNN during training, either maliciously (for instance, a manipulated training process that causes any traffic sign bearing a yellow sticker to be recognised as a stop sign) or unintentionally (for instance, biased training data that causes any traffic sign in front of a rural background to be recognised as "animal crossing").