[Re] Synbols: Probing Learning Algorithms with Synthetic Datasets

31 Jan 2021 (modified: 05 May 2023) | ML Reproducibility Challenge 2020 Blind Submission | Readers: Everyone
Keywords: Synthetic dataset generator, Computer vision
Abstract:

Scope of Reproducibility: To assess the features of Synbols and its capacity to probe well-known neural network architectures, we reproduced the results of the Supervised Learning classification and the Unsupervised Representation Learning experiments. We then generated datasets with the same attributes (and random seed) to verify that the results were consistent. Additionally, we sought further insight into the unsupervised task by modifying the pipeline and tweaking the downstream classifier.

Methodology: We predominantly followed the authors' instructions and their publicly available code. For the more computationally demanding models, we ran the experiment using only one seed of the same dataset. Modifications made to the original code in order to further explore some findings are discussed in the corresponding section.

Results: We managed to reproduce the original results within a 2% margin of the reported values, which was a pleasant surprise given the number of models and datasets tested. We therefore conclude that Synbols is a well-designed tool for rapidly generating a wide variety of low-resolution images of UTF-8 characters and strings. Our source code for generating the results, together with the datasets used, is publicly available in the repository attached to this report.

What was easy: We applaud the authors' reproducibility efforts and their availability whenever we had questions. A repository made specifically to facilitate the reproduction was available, and an up-to-date Docker image was at our disposal to generate more datasets with the tool. No hidden or forgotten assumptions were needed to reproduce their results. Thanks to all these efforts, our task was significantly simplified.

What was difficult: Across the two paradigms tested, twelve different models were originally trained. Although the important hyper-parameters were always mentioned or referenced, we sometimes struggled to check their implementation to verify that everything was performed as reported. However, the authors always made time to explain implementation details that were harder to understand at first glance.
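For readers who want to try the dataset regeneration step described above, the following is a minimal sketch, assuming the `synbols` package from the public ElementAI/synbols repository is available (typically by running the script through the provided Docker image). The functions `basic_attribute_sampler` and `generate_and_write_dataset` follow that repository's documented usage; the seeding approach, output path, and sample count here are illustrative assumptions rather than the report's exact configuration.

```python
# Hedged sketch: regenerating a small Synbols dataset with default attributes.
# Assumes the synbols package (https://github.com/ElementAI/synbols) is installed
# and that the script is run inside the project's Docker image.
import numpy as np

from synbols.generate import basic_attribute_sampler, generate_and_write_dataset

# Fixing the global NumPy seed is one way to make generation repeatable;
# the original report may control the random seed differently (assumption).
np.random.seed(42)

# Sampler over default character/font/background attribute distributions.
attr_sampler = basic_attribute_sampler()

# Write the generated dataset to disk (hypothetical path and sample count).
generate_and_write_dataset("./synbols_default_dataset", attr_sampler, n_samples=10000)
```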
Paper Url: https://openreview.net/forum?id=IP6XoWTKDZg&noteId=twNSRb-4rSk