Towards Reproducible and Reusable Deep Learning Systems Research Artifacts

Thierry Moreau, Anton Lokhmotov, Grigori Fursin

Sep 30, 2018 · NIPS 2018 Workshop MLOSS Submission
  • Abstract: This paper discusses results and insights from the 1st ReQuEST workshop, a collective effort to promote reusability, portability, and reproducibility of deep learning research artifacts within the Architecture/PL/Systems communities. ReQuEST (Reproducible Quality-Efficient Systems Tournament) uses the open-source Collective Knowledge framework (CK) to unify benchmarking, optimization, and co-design of deep learning system implementations, and to exchange results via a live multi-objective scoreboard. Systems evaluated under ReQuEST are diverse and include an FPGA-based accelerator, optimized deep learning libraries for x86 and ARM systems, and distributed inference in the Amazon cloud and over a cluster of Raspberry Pis. Finally, we discuss the limitations of our approach and how we plan to address them in the upcoming SysML artifact evaluation effort.
  • TL;DR: We describe insights from introducing reproducible and reusable artifact evaluation to the deep learning systems community.
  • Keywords: artifact evaluation, deep learning, systems, workflows, reproducibility, open-source
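The abstract above mentions that ReQuEST builds on the open-source Collective Knowledge framework (CK), which exposes its benchmarking workflows through both a command-line interface and a Python API. The sketch below is a minimal, hedged illustration of how such a workflow might be driven programmatically via CK's `ck.kernel.access` entry point; the repository and program names are hypothetical placeholders rather than the actual ReQuEST artifact identifiers.

```python
# Minimal sketch of driving a Collective Knowledge (CK) workflow from Python.
# Assumes the CK framework is installed (e.g. via `pip install ck`).
# The repository and program names below are hypothetical placeholders,
# not the exact ReQuEST artifact identifiers.

import ck.kernel as ck


def main():
    # Pull a CK repository containing a benchmarking workflow
    # (roughly equivalent to the CLI call `ck pull repo:<name>`).
    r = ck.access({'action': 'pull',
                   'module_uoa': 'repo',
                   'data_uoa': 'ck-request-demo'})  # hypothetical repo name
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'CK error'))

    # Run a program (benchmark) registered in the pulled repository
    # (roughly equivalent to `ck run program:<name>`).
    r = ck.access({'action': 'run',
                   'module_uoa': 'program',
                   'data_uoa': 'image-classification-demo'})  # hypothetical
    if r['return'] > 0:
        raise RuntimeError(r.get('error', 'CK error'))


if __name__ == '__main__':
    main()
```

The same steps can be performed from the shell with `ck pull repo:<name>` followed by `ck run program:<name>`; collected results can then be shared and compared on the live scoreboard described in the paper.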