Abstract: Transfer learning provides an effective solution for customizing accurate <i>Student</i> models quickly and at low cost, by transferring the knowledge that pre-trained <i>Teacher</i> models have learned over large datasets via fine-tuning. Many pre-trained Teacher models used in transfer learning are publicly available and maintained by public platforms, increasing their vulnerability to backdoor attacks. In this article, we demonstrate a backdoor threat to transfer learning tasks on both image and time-series data that leverages the knowledge of publicly accessible Teacher models and is designed to defeat three commonly adopted defenses: <i>pruning-based</i>, <i>retraining-based</i> and <i>input pre-processing-based defenses</i>. Specifically, (<inline-formula><tex-math notation="LaTeX">$\mathcal {A}$</tex-math></inline-formula>) a ranking-based selection mechanism is proposed to speed up the backdoor trigger generation and perturbation process while defeating <i>pruning-based</i> and/or <i>retraining-based defenses</i>; (<inline-formula><tex-math notation="LaTeX">$\mathcal {B}$</tex-math></inline-formula>) autoencoder-powered trigger generation is proposed to produce a robust trigger that can defeat the <i>input pre-processing-based defense</i> while guaranteeing that the selected neuron(s) can be significantly activated; and (<inline-formula><tex-math notation="LaTeX">$\mathcal {C}$</tex-math></inline-formula>) defense-aware retraining is proposed to generate the manipulated model using reverse-engineered model inputs. We launch effective misclassification attacks on Student models over real-world images, brain Magnetic Resonance Imaging (MRI) data and Electrocardiography (ECG) learning systems.
The experiments reveal that our enhanced attack maintains 98.4 and 97.2 percent classification accuracy, the same as the genuine model, on clean image and time-series inputs, while improving the attack success rate by <inline-formula><tex-math notation="LaTeX">$27.9\%-100\%$</tex-math></inline-formula> and <inline-formula><tex-math notation="LaTeX">$27.1\%-56.1\%$</tex-math></inline-formula> on trojaned image and time-series inputs, respectively, in the presence of pruning-based and/or retraining-based defenses.
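The intuition behind the ranking-based selection mechanism ($\mathcal {A}$) can be illustrated with a minimal sketch. Pruning-based defenses typically remove neurons that stay dormant on clean inputs, so an attacker who couples the trigger to neurons that already fire strongly on clean data makes the backdoor harder to prune away. The function name `rank_neurons` and the activation matrix below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def rank_neurons(clean_activations, k=3):
    """Return the indices of the top-k neurons ranked by mean activation
    over clean inputs. Neurons that fire strongly on clean data are
    unlikely to be removed by a pruning-based defense, so tying the
    trigger to them helps the backdoor survive pruning.

    clean_activations: array of shape (num_samples, num_neurons).
    """
    mean_act = clean_activations.mean(axis=0)
    # argsort is ascending; take the last k indices and reverse them
    # so the strongest neuron comes first
    return np.argsort(mean_act)[-k:][::-1]

# Toy example: activations of 5 neurons over 4 clean samples
acts = np.array([
    [0.1, 0.9, 0.2, 0.8, 0.0],
    [0.2, 0.8, 0.1, 0.9, 0.1],
    [0.1, 0.7, 0.3, 0.7, 0.0],
    [0.0, 0.9, 0.2, 0.8, 0.1],
])
print(rank_neurons(acts, k=2))  # neurons 1 and 3 have the highest means
```

In a full attack, the same ranking would be computed from the Teacher model's hidden-layer activations, after which the trigger is optimized to maximally excite the selected neurons.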