Abstract: Indoor localization using radio frequency identification (RFID) has benefited from deep learning, yet models trained in a labeled source environment often degrade when deployed in a different, unlabeled target environment. Unsupervised domain adaptation (UDA) aims to mitigate this distribution shift by aligning a source-trained model with target-domain data. In practice, the source dataset is frequently unavailable at adaptation time due to privacy and resource constraints. This motivates source-free domain adaptation (SFDA); however, most SFDA methods have been developed for classification, and extending them to indoor localization (regression) is challenging, especially when target datasets are small and noisy. Motivated by these limitations, we introduce mean-teacher localization (MTLoc), a source-free mean-teacher approach for indoor localization. MTLoc comprises a student network and a teacher network: the student is updated on noisy target data using teacher-generated pseudo-labels, while the teacher is kept stable through exponential moving averages (EMAs). To further improve robustness, we propose a mechanism that refines the teacher’s pseudo-labels via k-nearest-neighbor (k-NN) correction. MTLoc allows for self-supervised learning on target data, facilitating effective adaptation to dynamic and noisy indoor environments. Validated on real-world data from our experimental setup with INLAN Inc., MTLoc achieves high localization accuracy under challenging conditions, significantly reducing distance error compared with baselines (code and datasets: https://github.com/negarmehregan/Indoor-Localization/tree/main). On average, it reduces the mean absolute error (MAE) by 20.0% on the cross dataset and 22.5% on the square dataset. With confidence correction, these improvements reach 23.9% and 28.2%, respectively.
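For illustration, the sketch below shows one possible mean-teacher adaptation step with an EMA-updated teacher and k-NN refinement of pseudo-labels, written in PyTorch. It is not the authors' released implementation (available at the repository above); all names (MTLocNet, ema_update, knn_correct), the layer sizes, the EMA decay, the value of k, and the choice of the raw input features as the neighborhood space are assumptions made for this sketch.

```python
# Minimal sketch (assumed details, not the authors' code) of a mean-teacher
# adaptation step: the student learns from k-NN-corrected teacher pseudo-labels,
# and the teacher tracks the student via an exponential moving average (EMA).
import torch
import torch.nn as nn

class MTLocNet(nn.Module):
    """Toy regressor mapping RFID-derived features to 2-D coordinates."""
    def __init__(self, in_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    # Teacher weights are an exponential moving average of student weights.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1 - decay)

@torch.no_grad()
def knn_correct(pseudo, feats, k=5):
    # Replace each pseudo-label by the mean over its k nearest neighbors in
    # feature space, smoothing isolated noisy teacher predictions.
    d = torch.cdist(feats, feats)                       # pairwise distances
    idx = d.topk(k + 1, largest=False).indices[:, 1:]   # drop self-match
    return pseudo[idx].mean(dim=1)

def adapt_step(student, teacher, optimizer, x_target):
    teacher.eval()
    with torch.no_grad():
        pseudo = knn_correct(teacher(x_target), x_target)
    loss = nn.functional.mse_loss(student(x_target), pseudo)  # self-training loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)                       # stabilize the teacher
    return loss.item()

if __name__ == "__main__":
    student, teacher = MTLocNet(), MTLocNet()
    teacher.load_state_dict(student.state_dict())      # start teacher from source model
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x = torch.randn(128, 16)                            # unlabeled target-domain batch
    print(adapt_step(student, teacher, opt, x))
```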
DOI: 10.1109/JSEN.2025.3637161