Adversarial Examples Against WiFi Fingerprint-Based Localization in the Physical World

Published: 2024 · Last Modified: 16 May 2025 · IEEE Trans. Inf. Forensics Secur. 2024 · CC BY-SA 4.0
Abstract: WiFi Fingerprint-based Localization (WFL) has recently achieved promising results thanks to advances in deep learning. Unfortunately, recent studies reveal that deep-learning models are highly vulnerable to adversarial attacks, raising broader concerns about Deep-learning-based WiFi Fingerprint Localization Models (DFLMs). However, real-world adversarial attacks against DFLMs have not been fully investigated, so it remains unclear how to counter this potential threat. In this paper, we take the first step toward introducing physical-world adversarial examples against DFLMs. Specifically, we propose a general attack method named Phy-Adv, consisting of a physical attenuation loss and a differentiable simulation module; the generated adversarial noise can be feasibly produced in the real world and misleads DFLMs from the signal-source end. Furthermore, to counter this adversarial threat, we propose a Relaxant Multiple Batch Normalization (RMBN) approach, which mitigates the weak robustness of DFLMs through adaptive training-set segmentation on the data side and a multiple-batch-normalization design on the model side. To demonstrate the effectiveness of both the proposed physical adversarial examples and the defense strategy, we conduct extensive experiments on two datasets, BHD and TUT, and multiple deep models, e.g., AlexNet, VGG, and ResNet. The experimental results show that Phy-Adv achieves strong adversarial attack performance in the physical world, while RMBN provides considerable defense against such attacks.
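
To make the attack concrete, the following is a minimal sketch of the kind of optimization the abstract describes: crafting a perturbation on an RSSI fingerprint under a physical-producibility constraint. This is not the paper's code; the surrogate model, loss weights, and the specific attenuation penalty are illustrative assumptions.

# Illustrative sketch (not the paper's code): a physically constrained
# perturbation on an RSSI fingerprint, in the spirit of Phy-Adv. The
# surrogate localization model and the attenuation penalty are
# hypothetical placeholders.
import torch
import torch.nn.functional as F

def phy_adv_attack(model, fingerprint, true_pos, steps=100, lr=0.01, lam=1.0):
    """Craft a perturbation delta that misleads a localization model.

    model:       surrogate DFLM mapping fingerprints -> 2-D positions
    fingerprint: (1, n_aps) tensor of RSSI values
    true_pos:    (1, 2) tensor, the ground-truth location
    lam:         weight of the (assumed) physical attenuation loss
    """
    delta = torch.zeros_like(fingerprint, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        pred = model(fingerprint + delta)
        # Maximize the localization error (untargeted attack) ...
        attack_loss = -F.mse_loss(pred, true_pos)
        # ... while keeping the perturbation physically producible: a
        # passive attacker can realistically only attenuate signal power,
        # so penalize any amplifying (positive) component of delta.
        phys_loss = F.relu(delta).pow(2).sum()
        loss = attack_loss + lam * phys_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return delta.detach()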
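On the defense side, one common way to realize a "multiple batch normalization" design like the RMBN the abstract proposes is to route each training-set segment (e.g., clean vs. adversarial batches) through its own BatchNorm branch. The sketch below assumes this routing scheme; the class name and branch policy are illustrative, not the paper's actual design.

# Illustrative sketch (assumed design, not the paper's code): a layer
# with one BatchNorm branch per training-set segment, selected by a
# segment index supplied by the training loop.
import torch
import torch.nn as nn

class MultiBatchNorm2d(nn.Module):
    def __init__(self, num_features, num_branches=2):
        super().__init__()
        # One BN branch per data segment (e.g., clean vs. adversarial).
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_branches)
        )

    def forward(self, x, branch=0):
        # During training, each mini-batch is normalized by the branch
        # matching its segment; at test time a default branch is used.
        return self.bns[branch](x)

# Usage: replace BN layers in the backbone with MultiBatchNorm2d, then
# route clean batches to branch 0 and adversarial batches to branch 1
# during adversarial training.
layer = MultiBatchNorm2d(64, num_branches=2)
clean = torch.randn(8, 64, 16, 16)
out = layer(clean, branch=0)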