Interpretable Complex-Valued Neural Networks for Privacy Protection - ML Reproducibility Challenge 2020

31 Jan 2021 (modified: 05 May 2023) · ML Reproducibility Challenge 2020 Blind Submission · Readers: Everyone
Abstract:

Scope of Reproducibility: The authors of the original work do not supply any code, so our goal is to validate the main claims of the paper with our own implementation. We try to verify two claims: that the proposed complex-valued neural networks perform similarly to traditional real-valued networks on classification tasks, and that the introduced method provides better protection against privacy attacks.

Methodology: For our implementation we follow the authors' description where possible. No explicit information about the training process and some architectures is given, so we make our own assumptions where needed. Training all networks takes around 60 hours on an NVIDIA RTX 2080 Ti.

Results: In all experiments of our reproduction study, we were able to validate the authors' claim that the proposed network architectures provide better protection against privacy attacks. However, the observed benefits are not as extensive as the original results suggest. We also observed strong performance degradation when using the proposed complex-valued architectures in some classification experiments, which contradicts the authors' claim that performance is on par with standard real-valued neural networks.

What was easy: The authors provide clear descriptions of how to transform a traditional neural network into a complex-valued neural network and how to implement the proposed complex-valued layers. Additionally, the paper does a good job of explaining how the experiments are quantified.

What was difficult: Many implementation details are omitted in the paper, which made it difficult to get some parts working as intended. Specifically, hyperparameter settings are not given, which required us to make many assumptions. The large number of experiments also made it time-consuming to test different hyperparameter settings and restricted us to training only one model per experiment.
Communication with original authors: Due to lack of time, we did not communicate with the authors.
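The core idea behind the transformation mentioned above is that every layer of the complex-valued network is linear over the complex numbers, so it commutes with multiplication by a unit phase: rotating the hidden features by a secret angle before transmission does not change the final prediction once the rotation is undone, while an attacker without the angle sees scrambled features. The sketch below is our own minimal illustration of that phase-equivariance property using a single complex linear map in NumPy; the function name `complex_linear`, the shapes, and the angle are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_linear(x, w):
    """Apply a complex weight matrix to complex-valued features.

    Because matrix multiplication is linear over C, this layer
    commutes with multiplication by any unit phase exp(i*theta).
    """
    return x @ w

# Illustrative complex features and weights (shapes chosen arbitrarily).
x = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
w = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))

theta = 0.7                  # hypothetical secret rotation angle
rot = np.exp(1j * theta)     # unit phase exp(i*theta)

# Phase-equivariance: processing rotated features equals rotating the output.
out_from_rotated_input = complex_linear(rot * x, w)
rotated_output = rot * complex_linear(x, w)
assert np.allclose(out_from_rotated_input, rotated_output)
```

Under this property, a network of such layers can run entirely on rotated features, and only the holder of theta can rotate the result back, which is the privacy mechanism the claims above refer to.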
Paper Url: https://openreview.net/forum?id=XX26O1HXupp
