Testing and Verification of the Deep Neural Networks Against Sparse Pixel Defects

2022 (modified: 03 Nov 2022) · SAFECOMP Workshops 2022
Abstract: Deep neural networks can produce outstanding results when applied to image recognition tasks, but they are susceptible to image defects and modifications. Substantial degradation of an image can be detected by automatic or interactive prevention techniques. However, sparse pixel defects may have a significant impact on the dependability of safety-critical systems, especially autonomous driving vehicles: such perturbations can limit the perception capabilities of the system while remaining undetected by a human observer. The effective generation of such cases facilitates the simulation of real-life challenges caused by sparse pixel defects, such as occluded or stained objects. This work introduces a novel sparse adversarial attack generation method based on a differential evolution strategy. Additionally, we introduce a framework for sparse adversarial attack generation that can be integrated into the safety-critical systems development process. An empirical evaluation demonstrates that the proposed method outperforms and complements state-of-the-art techniques, allowing for a complete evaluation of an image recognition system.
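The abstract does not include code, and the paper's exact algorithm is not reproduced here. As a rough illustration of the general idea of a differential-evolution search for sparse pixel perturbations, here is a minimal sketch; the function names, parameters, and the `predict` interface (an image mapped to a probability vector) are all assumptions, not the authors' implementation.

```python
import numpy as np

def sparse_de_attack(predict, image, true_label, n_pixels=1,
                     pop_size=20, iters=50, f=0.5, seed=0):
    """Differential-evolution search for a perturbation touching at most
    `n_pixels` pixels that lowers the model's confidence in `true_label`.
    `predict` is assumed to map an HxWxC float image in [0, 1] to a
    probability vector. Hypothetical sketch, not the paper's method."""
    rng = np.random.default_rng(seed)
    h, w, c = image.shape
    # Each candidate encodes n_pixels genes of (row, col, channel values).
    lo = np.tile([0, 0] + [0.0] * c, n_pixels)
    hi = np.tile([h - 1, w - 1] + [1.0] * c, n_pixels)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))

    def apply(candidate):
        out = image.copy()
        for gene in candidate.reshape(n_pixels, 2 + c):
            out[int(gene[0]), int(gene[1])] = gene[2:]
        return out

    def fitness(candidate):
        # Confidence in the true class: lower is better for the attacker.
        return predict(apply(candidate))[true_label]

    scores = np.array([fitness(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, d = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.clip(a + f * (b - d), lo, hi)  # DE/rand/1 mutation
            s = fitness(trial)
            if s < scores[i]:  # greedy selection keeps improvements
                pop[i], scores[i] = trial, s
    best = pop[scores.argmin()]
    return apply(best), float(scores.min())
```

A perturbed image produced this way differs from the original in at most `n_pixels` pixel locations, which is what makes the attack "sparse" and hard to spot for a human observer.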