Physically-Constrained Adversarial Attacks on Brain-Machine Interfaces

Published: 21 Nov 2022, Last Modified: 05 May 2023 · TSRML 2022
Keywords: neuroscience, brain-computer interfaces, practical attacks, adversarial attacks, EEGNet, edge computing, brain-machine interfaces
TL;DR: We design imperceptible adversarial attacks on brain-machine interfaces and model their propagation over the scalp under physical constraints.
Abstract: Deep learning (DL) has been widely employed in brain-machine interfaces (BMIs) to decode subjects' intentions from recorded brain activity, enabling direct interaction with machines. BMI systems play a crucial role in medical applications and have recently gained increasing interest as consumer-grade products. Failures in such systems can cause medical misdiagnoses, physical harm, and financial loss. Especially with the current market boom of such devices, it is of utmost importance to analyze and understand potential malicious attacks in depth, in order to develop countermeasures and avoid future damage. This work presents the first study that analyzes and models adversarial attacks based on physical-domain constraints in DL-based BMIs. Specifically, we assess the robustness of EEGNet, the current state-of-the-art network embedded in a real-world, wearable BMI. We propose new methods that incorporate domain-specific insights and constraints to design natural, imperceptible attacks and to realistically model signal propagation over the human scalp. Our results show that EEGNet is significantly vulnerable to adversarial attacks, with an attack success rate of more than 50%.
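
To make the threat model concrete, the sketch below shows a generic L∞-bounded projected gradient descent (PGD) attack against an EEG classifier. This is not the paper's physically-constrained method: the `TinyEEGNet` stand-in, the tensor shapes, and the `eps`/`alpha`/`steps` values are all hypothetical choices for illustration, and the paper's scalp-propagation and imperceptibility constraints are only crudely approximated by the small L∞ budget.

```python
# Illustrative sketch only: a standard L-infinity PGD attack on an EEG
# classifier. TinyEEGNet is a hypothetical placeholder for EEGNet; all
# hyperparameters and shapes are assumptions, not the paper's settings.
import torch
import torch.nn as nn


class TinyEEGNet(nn.Module):
    """Minimal stand-in CNN taking (batch, 1, channels, time) EEG trials."""

    def __init__(self, n_channels=22, n_samples=256, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),
            nn.BatchNorm2d(8),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((n_channels, 16)),
            nn.Flatten(),
            nn.Linear(8 * n_channels * 16, n_classes),
        )

    def forward(self, x):
        return self.net(x)


def pgd_attack(model, x, y, eps=0.05, alpha=0.01, steps=20):
    """L-inf PGD: the small perturbation budget eps is a crude proxy for
    the imperceptibility constraint discussed in the paper."""
    x_adv = x.clone().detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # project to L-inf ball
    return x_adv.detach()


if __name__ == "__main__":
    model = TinyEEGNet().eval()
    x = torch.randn(8, 1, 22, 256)        # batch of synthetic EEG trials
    y = torch.randint(0, 4, (8,))
    x_adv = pgd_attack(model, x, y)
    clean_acc = (model(x).argmax(1) == y).float().mean()
    adv_acc = (model(x_adv).argmax(1) == y).float().mean()
    print(f"clean acc {clean_acc:.2f} -> adversarial acc {adv_acc:.2f}")
```

In this toy setting, an attack "succeeds" whenever the adversarial trial flips the predicted class; the paper's reported success rate of over 50% refers to attacks that additionally respect physical signal-propagation constraints over the scalp, which this generic sketch does not model.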