[Reproducibility Report] Path Planning using Neural A* Search

Anonymous

05 Feb 2022 (modified: 05 May 2023) · ML Reproducibility Challenge 2021 Fall Blind Submission
TL;DR: Reproducibility report for the ML Reproducibility Challenge 2021 Fall
Abstract:

Reproducibility Summary: This paper is a reproducibility report for "Path Planning using Neural A* Search" by Yonetani et al., published at ICML 2021, submitted as part of the ML Reproducibility Challenge 2021. The source code for our reimplementation and the additional experiments we performed is available.

Scope of Reproducibility: The original paper proposes the Neural A* planner and claims that it achieves an optimal balance between the reduction of node expansions and path accuracy. We verify this claim by reimplementing the model in a different framework and reproducing the results published in the original paper. We have also provided a code-flow diagram to aid comprehension of the code structure. As extensions to the original paper, we explore the effects of (1) generalizing the model by training it on a shuffled dataset, (2) introducing dropout, (3) implementing empirically chosen hyperparameters as trainable parameters in the model, (4) altering the network model to Generative Adversarial Networks (GANs) to introduce stochasticity, (5) modifying the encoder from U-Net to U-Net++, and (6) incorporating cost maps obtained from the Neural A* module in other variations of A* search.

Methodology: We reimplemented the publicly available source code provided by the authors in PyTorch Lightning to encourage reproducibility of the code and flexibility over different hardware setups. We reproduced the results published by the authors and also conducted additional experiments. The training code was run on Kaggle with a GPU (Tesla P100-PCIE-16GB) and CPU (13 GB RAM, 2 cores of an Intel Xeon).

Results: The claims of the original paper were successfully reproduced and validated within 3.2% of the reported values. Results for the additional experiments mentioned above have also been included in the report.

What was easy: The code provided in the original repository was well structured and documented, making it easy to understand and reimplement. The authors also provide the source code for dataset generation, which made the task of reproducing the results fairly simple.

What was difficult: Experimentation on some datasets took a considerable amount of time, limiting our experiments to the MP dataset. Runtime results could not be reproduced, as they depend on various factors, including differences in datasets, hardware environment, and A* search implementation.

Communication with original authors: The authors were contacted via email regarding the computational requirements of training and errors we faced, to which prompt and helpful replies were received.
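To illustrate the reimplementation approach described above, the following is a minimal sketch of how a Neural A* training step might be wrapped in a PyTorch Lightning module. It is not the authors' code: the encoder and differentiable A* submodules, their call signatures, the batch layout, the L1 loss between search histories and ground-truth paths, and the RMSprop settings are all assumptions made for illustration.

```python
# Hypothetical sketch of a Neural A* training loop in PyTorch Lightning.
# The encoder, differentiable A* module, and batch format are placeholders,
# not the original authors' implementation.
import torch
import torch.nn as nn
import pytorch_lightning as pl


class NeuralAstarModule(pl.LightningModule):
    def __init__(self, encoder: nn.Module, differentiable_astar: nn.Module,
                 lr: float = 1e-3):
        super().__init__()
        self.encoder = encoder          # e.g. a U-Net producing a guidance (cost) map
        self.astar = differentiable_astar  # assumed to return (search histories, paths)
        self.lr = lr
        self.loss_fn = nn.L1Loss()      # assumed loss between histories and optimal paths

    def forward(self, map_design, start_map, goal_map):
        # Encode the map together with the start/goal information into a guidance map,
        # then run the differentiable A* search over it.
        guidance = self.encoder(torch.cat([map_design, start_map + goal_map], dim=1))
        return self.astar(guidance, start_map, goal_map, map_design)

    def training_step(self, batch, batch_idx):
        map_design, start_map, goal_map, opt_trajectory = batch
        histories, _ = self(map_design, start_map, goal_map)
        loss = self.loss_fn(histories, opt_trajectory)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.RMSprop(self.parameters(), lr=self.lr)
```

Wrapping the model this way keeps the training loop hardware-agnostic (a `pl.Trainer` handles device placement and checkpointing), which matches the report's stated motivation of flexibility over different hardware setups.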
Paper Url: http://proceedings.mlr.press/v139/yonetani21a/yonetani21a.pdf
Paper Venue: ICML 2021