Reproducibility report: Interpretable Complex-Valued Neural Networks For Privacy Protection

31 Jan 2021 (modified: 05 May 2023) · ML Reproducibility Challenge 2020 Blind Submission · Readers: Everyone
Keywords: Deep Neural Networks, Complex values, Encryption, Privacy Protection
Abstract: In this reproducibility report, we test the following two main claims of Xiang et al.'s paper:
- The performance of a Deep Neural Network (DNN) is largely preserved when DNNs with complex encoded features are compared to DNNs with non-encoded features.
- It is more difficult for an attacker to reconstruct the original input from the complex encoded features than from the non-encoded features.

Since the code was not made publicly available, we implemented our own version of the reported DNNs. Baseline DNNs were created using the default model architecture. The figures and mathematics of the original paper were used to recreate the structure of the complex-valued DNNs, in which the model is divided into an encoder, a processing module on the cloud, and a decoder. The goal of the complex-valued DNN is to rotate and obfuscate the features so that the privacy of the data is protected. We compare the performance of the baseline and complex-valued DNNs, and then test the robustness of the models against privacy attacks, mimicking potential attackers with inversion attacks.

Overall, our results are not in line with those of the original paper. We were not able to reach the performance of the baseline models reported in the original paper. Additionally, our complex models obtain a much higher classification error than the baseline models, contradicting the claim that performance is largely preserved. Regarding the second claim, however, we did find supporting evidence: an attacker trying to reconstruct the intermediate-level features had a harder time with the obfuscated features than with the untouched ones.

Creating the baseline DNNs and the inversion attacker was relatively easy because much of the code and additional information about these models can be found online. Additionally, the original paper clearly describes how to create the complex DNNs by dividing the baseline DNNs into an encoder, a processing module, and a decoder, and clearly lays out the mathematics behind the overall optimization function of the proposed complex model. The main difficulty arose from the fact that certain hyperparameters and model structures were not specified by the authors. This mainly led to problems with the implementation of the GAN-based encoder and the rotation of the features. Additionally, complex tensor support in PyTorch caused a few problems in the backpropagation stage of training. Lastly, some of the modified layers for the complex-valued DNNs had no clear mathematical formulation. The combination of these factors made it difficult to reproduce the original results.
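For context, the privacy mechanism rests on encoding real-valued intermediate features as complex numbers and rotating them in the complex plane by a secret angle, so that the cloud-side processing module never sees the raw features. A minimal NumPy sketch of this rotation idea follows; the decoy feature `b`, the angle `theta`, and the function names are our own illustrative assumptions and not the authors' implementation (which, as noted above, was not released):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(h, b, theta):
    # Pair the true features h with decoy features b as a complex tensor,
    # then rotate by the secret angle theta: x = (h + i*b) * exp(i*theta).
    # An attacker observing x cannot separate h from b without theta.
    return (h + 1j * b) * np.exp(1j * theta)

def decode(x, theta):
    # The trusted decoder undoes the rotation and discards the decoy
    # (imaginary) component, recovering the true features exactly.
    return np.real(x * np.exp(-1j * theta))

h = rng.standard_normal((4, 8))    # true intermediate features
b = rng.standard_normal((4, 8))    # decoy features (GAN-generated in the paper)
theta = rng.uniform(0, 2 * np.pi)  # secret rotation angle, never sent to the cloud

x = encode(h, b, theta)
h_rec = decode(x, theta)
print(np.allclose(h, h_rec))  # rotation is exactly invertible with theta
```

The cloud-side processing module must then be built from layers that commute with this rotation (the "modified layers" mentioned above), which is where the missing mathematical formulations made reproduction difficult.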
Paper Url: https://openreview.net/forum?id=XX26O1HXupp&noteId=El4SHc0RyIW&referrer=%5BML%20Reproducibility%20Challenge%202020%5D(%2Fgroup%3Fid%3DML_Reproducibility_Challenge%2F2020)
Supplementary Material: zip