Abstract: Internet of Things (IoT) applications bring great convenience to people's lives, but concern over users' data privacy remains the major barrier to the development of IoT. <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${k}$ </tex-math></inline-formula> -anonymity is a method for protecting users' data privacy, but it is presently known to suffer from inference attacks. Thus far, existing work relies only on a number of experimental examples to validate <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${k}$ </tex-math></inline-formula> -anonymity's performance against inference attacks, and thereby lacks a theoretical guarantee. To tackle this issue, in this paper we propose the first theoretical foundation that gives a nonasymptotic bound on the performance of <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${k}$ </tex-math></inline-formula> -anonymity against inference attacks, taking adversaries' background information into consideration. The main idea is to first quantify adversaries' background information and, from the adversaries' point of view, classify users' data into four kinds: 1) independent with unknown data values; 2) locally dependent with unknown data values; 3) independent with certain known data values; and 4) locally dependent with certain known data values. We then move one step further, theoretically proving the bound on the performance of <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${k}$ </tex-math></inline-formula> -anonymity corresponding to each of the four kinds of users' data by connecting it with noiseless privacy. 
We argue that such a theoretical foundation links <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${k}$ </tex-math></inline-formula> -anonymity with noiseless privacy, theoretically proving that <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${k}$ </tex-math></inline-formula> -anonymity provides noiseless privacy. Additionally, this paper theoretically explains why <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${k}$ </tex-math></inline-formula> -anonymity is vulnerable to inference attacks using the modified Stein method. Simulations on a real check-in dataset from a location-based social network validate our results. We believe that this paper can bridge the gap between design and evaluation, enabling a designer to construct a more practical <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">${k}$ </tex-math></inline-formula> -anonymity technique that resists inference attacks in real-life scenarios.
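For readers unfamiliar with the technique the abstract analyzes, a minimal sketch of the standard k-anonymity property may help: a table is k-anonymous if every combination of quasi-identifier values is shared by at least k records. This is the textbook definition only, not the paper's bounding technique; the record fields and generalized zip codes below are hypothetical illustrations.

```python
from collections import Counter

def satisfies_k_anonymity(records, quasi_identifiers, k):
    """Return True if every quasi-identifier combination appears
    in at least k records (the standard k-anonymity test)."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return all(count >= k for count in groups.values())

# Toy check-in records; 'zip' and 'age_band' are the quasi-identifiers,
# already generalized (e.g., "100**") by some anonymization step.
records = [
    {"zip": "100**", "age_band": "20-29", "place": "cafe"},
    {"zip": "100**", "age_band": "20-29", "place": "gym"},
    {"zip": "200**", "age_band": "30-39", "place": "park"},
    {"zip": "200**", "age_band": "30-39", "place": "mall"},
]
print(satisfies_k_anonymity(records, ["zip", "age_band"], k=2))  # True
print(satisfies_k_anonymity(records, ["zip", "age_band"], k=3))  # False
```

As the paper's classification suggests, this syntactic guarantee says nothing about an adversary who knows data values or dependencies among records, which is exactly the gap the proposed bounds address.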