Keywords: EEG BCI, electroencephalogram, backdoor attack, reinforcement learning, frequency transform
TL;DR: This paper proposes an invisible and robust backdoor attack for EEG BCIs.
Abstract: The electroencephalogram (EEG) based brain-computer interface (BCI) has benefited from the tremendous success of deep learning (DL) models, gaining a wide range of applications. However, DL models have been shown to be vulnerable to backdoor attacks. Despite the many successful attacks on images, designing a stealthy and effective attack for EEG is a non-trivial task. Existing EEG attacks mainly focus on single-target-class attacks, and they either require access to the training stage of the target DL models or fail to maintain high stealthiness. Addressing these limitations, we propose a novel backdoor attack called ManiBCI, where the adversary can arbitrarily manipulate which target class the EEG BCI misclassifies to, without engaging in the training stage. Specifically, ManiBCI is a three-stage clean-label poisoning attack: 1) selecting one trigger for each class; 2) learning optimal masks over the EEG electrodes and frequencies to inject, for each trigger, with reinforcement learning; 3) injecting each trigger's frequencies into the poisoned data for its class by linearly interpolating the spectral amplitudes of the two signals according to the learned masks. Experiments on three EEG datasets demonstrate the effectiveness and robustness of ManiBCI. The proposed ManiBCI also easily bypasses existing backdoor defenses. Code will be published after the anonymous period.
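The code is not yet released, but the injection step (stage 3) can be sketched from the abstract alone: take the FFT of a clean trial and a trigger, and linearly interpolate their amplitude spectra wherever a learned electrode/frequency mask is active, keeping the clean trial's phase. The following minimal Python sketch is illustrative, not the authors' implementation; `inject_frequency_trigger`, the `alpha` weight, and the mask layout are assumptions, with trials assumed to be `(channels, samples)` arrays.

```python
import numpy as np

def inject_frequency_trigger(x, trigger, mask, alpha=0.5):
    """Blend a trigger's spectral amplitudes into a clean EEG trial.

    x, trigger: arrays of shape (channels, samples).
    mask: binary array of shape (channels, samples // 2 + 1) selecting
          which electrodes/frequency bins to poison (hypothetical
          stand-in for the RL-learned masks in stage 2).
    alpha: interpolation weight for the trigger amplitude (assumed).
    """
    X = np.fft.rfft(x, axis=-1)
    T = np.fft.rfft(trigger, axis=-1)
    amp, phase = np.abs(X), np.angle(X)
    # Linearly interpolate amplitudes only where the mask is set;
    # the clean trial's phase is kept intact for stealthiness.
    amp = np.where(mask.astype(bool),
                   (1.0 - alpha) * amp + alpha * np.abs(T),
                   amp)
    return np.fft.irfft(amp * np.exp(1j * phase), n=x.shape[-1], axis=-1)

# Usage: poison a trial with the trigger assigned to its (clean) label.
# poisoned = inject_frequency_trigger(eeg_trial, class_trigger, learned_mask)
```

Operating on amplitudes while preserving phase is a common choice in frequency-domain backdoors, since phase distortions tend to produce visible time-domain artifacts; whether ManiBCI does exactly this is an assumption here.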
Primary Area: Safety in machine learning
Submission Number: 6093