The rapid growth of AR/VR/MR applications and cloud-based visual localization has heightened concerns over user privacy. These concerns have been further escalated by the ability of deep neural networks to recover detailed images of a scene from a sparse set of 3D or 2D points and their descriptors, known as inversion attacks. Research on privacy-preserving localization has therefore focused on preventing such attacks through geometry obfuscation techniques such as lifting points to higher dimensions or swapping coordinates. In this paper, we reveal a common vulnerability in these methods that allows approximate point recovery using known neighborhoods. We further show that these neighborhoods can be computed by learning to identify descriptors that co-occur in neighborhoods. Extensive experiments demonstrate that all existing geometric obfuscation schemes remain susceptible to such recovery, challenging their claims of being privacy-preserving.
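The neighborhood-based recovery described above can be illustrated with a toy sketch. The example below is hypothetical and not the paper's actual method: it assumes a line-lifting obfuscation (each 3D point is replaced by a line through it with a random direction) and a known neighborhood, and recovers approximate points by a least-squares search for the line parameters that bring all recovered points closest together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth 3D points forming a tight neighborhood (hypothetical scene patch).
n = 8
pts = rng.normal(size=(n, 3)) * 0.1 + np.array([1.0, 2.0, 3.0])

# Obfuscation: lift each point to a 3D line through it with a random direction,
# and publish only an arbitrary origin on each line plus its direction.
dirs = rng.normal(size=(n, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
origins = pts + rng.normal(size=(n, 1)) * dirs

# Attack sketch: knowing these lines belong to one neighborhood, pick line
# parameters t_i so the recovered points p_i = origins[i] + t_i * dirs[i]
# are mutually as close as possible. The objective
#   sum_{i<j} ||p_i - p_j||^2
# is linear least squares in t.
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
A = np.zeros((3 * len(pairs), n))
b = np.zeros(3 * len(pairs))
for k, (i, j) in enumerate(pairs):
    A[3 * k:3 * k + 3, i] = dirs[i]
    A[3 * k:3 * k + 3, j] = -dirs[j]
    b[3 * k:3 * k + 3] = origins[j] - origins[i]
t, *_ = np.linalg.lstsq(A, b, rcond=None)
recovered = origins + t[:, None] * dirs

err = np.linalg.norm(recovered - pts, axis=1).mean()
print(f"mean recovery error: {err:.3f}")
```

Because every line passes through its true point and the neighborhood is spatially tight, pulling the estimates together lands each one near its original point; the recovery error is on the order of the neighborhood's spread rather than the scene's extent.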
Keywords: Visual Localization, Privacy-preserving representations, Privacy-preserving localization
TL;DR: In obfuscated representations of keypoints or 3D scene points, a rough estimate of neighborhoods allows recovery of the original points.
Supplementary Material: zip
Submission Number: 148