Adversarial Attacks on Deep Learning-based Floor Classification and Indoor Localization

Published: 01 Jan 2021, Last Modified: 27 Sept 2024 · WiseML@WiSec 2021 · CC BY-SA 4.0
Abstract: With the great advances in location-based services (LBS), Wi-Fi localization has attracted considerable interest due to its ubiquitous availability in indoor environments. Deep neural networks (DNNs) are a powerful method for achieving high localization performance with Wi-Fi signals. However, DNN models have been shown to be vulnerable to adversarial examples generated by introducing a subtle perturbation. In this paper, we propose adversarial deep learning for indoor localization systems using the Wi-Fi received signal strength indicator (RSSI). In particular, we study the impact of adversarial attacks on floor classification and location prediction with Wi-Fi RSSI. Three white-box attack methods are examined: the fast gradient sign method (FGSM), projected gradient descent (PGD), and the momentum iterative method (MIM). We validate the performance of DNN-based floor classification and location prediction using a public dataset and show that the DNN models are highly vulnerable to all three white-box adversarial attacks.
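The simplest of the three attacks named in the abstract, FGSM, perturbs an input by a small step in the direction of the sign of the loss gradient. The sketch below illustrates the idea on a toy logistic-regression classifier standing in for the paper's DNN floor classifier; the weights, the 5-dimensional RSSI-like input, and the epsilon value are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    # Binary cross-entropy for a single sample.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    # Analytic gradient of the cross-entropy loss w.r.t. the input:
    # dL/dx = (sigmoid(w @ x + b) - y) * w
    grad = (sigmoid(w @ x + b) - y) * w
    # FGSM: one step of size eps along the sign of the input gradient.
    return x + eps * np.sign(grad)

# Toy stand-ins: a 5-AP RSSI-like fingerprint and made-up model weights.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
b = 0.1
x = rng.normal(size=5)
y = 1.0  # true label, e.g. "floor 1"

x_adv = fgsm(x, y, w, b, eps=0.1)
# The perturbation is bounded by eps in every coordinate, yet the loss grows.
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12
assert loss(x_adv, y, w, b) > loss(x, y, w, b)
```

PGD and MIM build on the same gradient-sign step: PGD iterates it with a projection back into an epsilon-ball, and MIM adds a momentum term to the accumulated gradient across iterations.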