[Re] Solving Phase Retrieval With a Learned Reference

Published: 11 Apr 2022, Last Modified: 05 May 2023 (RC2021)
Keywords: Phase Retrieval, Unrolled Network, Gradient Descent, Learned Reference
Abstract:

Scope of Reproducibility
This report reproduces the experiments and validates the results of the ECCV 2020 paper "Solving Phase Retrieval with a Learned Reference" by Hyder et al. The authors consider the task of recovering an unknown signal from its Fourier magnitudes, where the measurements are obtained after a reference image is added to the signal. To solve this task, they propose a novel iterative phase retrieval algorithm, presented as an unrolled network, that can learn such a reference from a small amount of data. They show that the learned reference generalizes well to unseen data distributions and is robust to spatial data augmentations such as shifting and rotation.

Methodology
We use the provided original code to reproduce the experiments from Hyder et al. that validate the proposed claims. Nevertheless, we refactor the code base to improve performance, and we extend it to carry out experiments for which no code is available. We perform a hyperparameter search to investigate the influence and optimal values of the learning rates in both the training and retrieval processes. Additionally, we conduct an ablation study to evaluate which parts of the proposed algorithm are necessary. For our experiments we use a single NVIDIA TESLA P100 GPU with 16 GB of memory and approximately 100 hours of compute for all experiments combined.

Results
In general, we are able to reproduce the results of Hyder et al. Owing to the hyperparameter search, we are confident that the results are not cherry-picked and are largely reproducible using the authors' implementation of the algorithm. With our additional experiments, we further strengthen the validity of the proposed method and help future researchers and practitioners by providing additional information on the learning rates in the training and retrieval processes.

What Was Easy
The authors provide an implementation of their algorithm that is executable in our environment after exchanging deprecated functions. The considered datasets are open access and hence easy to use. Furthermore, the computational cost is fairly low, so we could run extensive experiments and even compare different hyperparameter settings.

What Was Difficult
We spent some effort understanding the authors' implementation, as it is only sparsely documented and the computational tricks it uses are not explained in detail. Moreover, it contains some redundant code that slows down computation. Beyond refactoring, we had to extend the implementation to be able to run our experiments. The lack of information about the learning rates slowed down the reproduction of the results, as we first had to investigate their influence on the training and retrieval processes before we could adjust the parameters effectively.

Communication With Original Authors
We were in contact with the authors via email and would like to thank them for their help. In particular, we thank Rakib Hyder, who kindly answered all our questions regarding implementation details and hyperparameters, and Salman Asif, who was open to our implementation suggestions and provided useful feedback on this report.
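To make the measurement setup concrete: the signal is observed only through the Fourier magnitudes of the signal plus a reference image, and retrieval proceeds iteratively from those magnitudes. The sketch below illustrates this forward model and a plain gradient-descent retrieval step on the amplitude loss; the function names and the simple update rule are our illustrative assumptions, not the authors' exact unrolled network or learned reference.

```python
import numpy as np

def measure(x, u):
    """Forward model (sketch): Fourier magnitudes of signal x plus reference u."""
    return np.abs(np.fft.fft2(x + u))

def retrieve(y, u, steps=50, lr=0.5):
    """Recover x from y = |F(x + u)| with a known reference u, by gradient
    descent on the amplitude loss 0.5 * || |F(x + u)| - y ||^2.
    This is a generic baseline, not the paper's unrolled network."""
    x = np.zeros_like(u)
    for _ in range(steps):
        z = np.fft.fft2(x + u)
        # Amplitude-flow-style residual in the Fourier domain:
        # (|z| - y) * z / |z|, written to avoid division by zero.
        r = z - y * z / (np.abs(z) + 1e-12)
        # Back-project to the signal domain; the real part is taken
        # because the signal is assumed real-valued. The FFT's 1/N
        # normalization is absorbed into the step size lr.
        x = x - lr * np.real(np.fft.ifft2(r))
    return x
```

Because the measurement loss is non-convex, a good reference `u` matters: it anchors the Fourier phases, which is exactly why the paper learns `u` on a small training set rather than fixing it by hand.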
Paper Url: https://openreview.net/forum?id=aGfr4iF_Li&noteId=dOsxraq3Vto
Supplementary Material: zip