Towards Reproducible and Reusable Deep Learning Systems Research Artifacts

Oct 29, 2018 (edited Jan 22, 2019) · NIPS 2018 Workshop MLOSS Paper
  • Keywords: artifact evaluation, deep learning, systems, workflows, reproducibility, open-source
  • TL;DR: We describe insights from introducing reproducible and reusable artifact evaluation to the deep learning systems community.
  • Abstract: This paper discusses results and insights from the 1st ReQuEST workshop, a collective effort to promote the reusability, portability, and reproducibility of deep learning research artifacts within the Architecture/PL/Systems communities. ReQuEST (Reproducible Quality-Efficient Systems Tournament) uses the open-source Collective Knowledge (CK) framework to unify the benchmarking, optimization, and co-design of deep learning system implementations, and to exchange results via a live multi-objective scoreboard. The systems evaluated under ReQuEST are diverse, including an FPGA-based accelerator, optimized deep learning libraries for x86 and ARM systems, and distributed inference in the Amazon Cloud and over a cluster of Raspberry Pis. Finally, we discuss the limitations of our approach and how we plan to improve upon them for the upcoming SysML artifact evaluation effort.
  • Decision: accept