Practical Adversarial Attacks on Brain--Computer Interfaces


Sep 29, 2021 (edited Nov 19, 2021) · ICLR 2022 Conference Blind Submission
  • Keywords: neuroscience, brain-computer interfaces, practical attacks, adversarial attacks, EEGNet, edge computing, embedded systems
  • Abstract: Deep learning has been widely employed in brain--computer interfaces (BCIs) to decode a subject's intentions from recorded brain activity, enabling direct interaction with computers and machines. BCI systems play a crucial role in motor rehabilitation and have recently experienced a significant market boost as consumer-grade products. Recent studies have shown that deep learning-based BCIs are vulnerable to adversarial attacks. Failures in such systems might cause medical misdiagnoses, physical harm, and financial damage; it is therefore of utmost importance to analyze and understand potential malicious attacks in depth in order to develop countermeasures. In this work, we present the first study that analyzes and models adversarial attacks on EEG-based BCIs under physical-domain constraints. Specifically, we assess the robustness of EEGNet, the current state-of-the-art network for embedded BCIs. We propose new methods to induce denial-of-service attacks and incorporate domain-specific insights and constraints to accomplish two key goals: (i) create smooth adversarial attacks that are physiologically plausible; (ii) consider the realistic case where the attack is injected at the origin of the signal acquisition and propagates across the human head. Our results show that EEGNet is significantly vulnerable to adversarial attacks, with an attack success rate of more than 50%. With our work, we want to raise awareness and incentivize future development of proper countermeasures.
  • One-sentence Summary: We show that state-of-the-art deep learning-based BCIs are vulnerable to practical attacks.
  • Supplementary Material: zip
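The abstract's first goal, smooth perturbations that remain physiologically plausible, can be illustrated with a minimal sketch: a gradient-sign (FGSM-style) perturbation that is low-pass filtered with a moving average before being added to the multi-channel EEG signal. All function names, parameters, and the use of a moving-average filter here are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def fgsm_smooth(x, grad, eps=0.1, kernel=11):
    """Sketch of a smoothed adversarial perturbation for EEG.

    x:    EEG signal, shape (channels, samples) -- illustrative layout
    grad: gradient of the decoder's loss w.r.t. x (assumed precomputed)
    eps:  maximum perturbation amplitude (FGSM step size)
    """
    delta = eps * np.sign(grad)              # classic FGSM step
    k = np.ones(kernel) / kernel             # moving-average (low-pass) filter
    # Smooth each channel along time so the perturbation has no abrupt jumps
    smooth = np.apply_along_axis(
        lambda ch: np.convolve(ch, k, mode="same"), -1, delta)
    return x + smooth

# Toy example: 8 EEG channels, 256 time samples of synthetic data
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 256))
grad = rng.standard_normal((8, 256))   # stand-in for a real loss gradient
x_adv = fgsm_smooth(x, grad)
```

Because the moving average of values bounded by eps is itself bounded by eps, the smoothed perturbation still respects the original amplitude budget while removing the high-frequency jumps that would make a raw sign perturbation implausible in an EEG trace.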
17 Replies