ModulOM: Disseminating Deep Learning Research with Modular Output Mathematics

Published: 14 Apr 2021, Last Modified: 05 May 2023
Rethinking ML Papers - ICLR 2021 Workshop Poster
Keywords: deep learning, modularity, mathematics
Abstract: Solving a task with a deep neural network requires an appropriate formulation of the underlying inference problem. A formulation defines the type of variables output by the network, but also the set of variables and functions, denoted the output mathematics, needed to turn those outputs into task-relevant predictions. Although task performance may depend largely on the formulation, most deep learning experiment repositories do not offer a convenient way to explore formulation variants in a flexible and incremental manner. Software components for neural network creation, parameter optimization, or data augmentation, in contrast, offer a degree of modularity that has proved to facilitate the transfer of know-how associated with model development; no such modularity exists for output mathematics. Our paper addresses this limitation by embedding the output mathematics in a modular component as well, building on multiple-inheritance principles in object-oriented programming. The flexibility offered by the proposed component and its added value in terms of knowledge dissemination are demonstrated in the context of the Panoptic-Deeplab method, a representative computer vision use case.
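Illustrative sketch (ours, not taken from the paper): a minimal PyTorch example of how output mathematics might be composed through multiple inheritance, in the spirit the abstract describes. All names here (OutputMathematics, SoftmaxMixin, ArgmaxMixin, SemanticPrediction) are hypothetical and do not reflect ModulOM's actual API.

import torch
import torch.nn as nn

class OutputMathematics(nn.Module):
    # Base component: turns raw network outputs into task predictions.
    def forward(self, logits):
        return logits

class SoftmaxMixin(OutputMathematics):
    # Formulation fragment: interpret outputs as per-pixel class scores.
    def forward(self, logits):
        return torch.softmax(super().forward(logits), dim=1)

class ArgmaxMixin(OutputMathematics):
    # Formulation fragment: collapse class scores into a hard label map.
    def forward(self, logits):
        return super().forward(logits).argmax(dim=1)

# A formulation variant is obtained by composing fragments; Python's
# method resolution order chains the forward() calls
# (ArgmaxMixin -> SoftmaxMixin -> base).
class SemanticPrediction(ArgmaxMixin, SoftmaxMixin):
    pass

logits = torch.randn(1, 19, 64, 64)    # (batch, classes, H, W)
labels = SemanticPrediction()(logits)  # (1, 64, 64) label map

Under this kind of design, swapping one mixin for another (e.g., replacing ArgmaxMixin with a thresholding variant) changes the task formulation incrementally, without touching the network or the rest of the pipeline.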
TL;DR: The code implementing the task formulation around a neural network is difficult to reuse or modify, which impedes research. We propose a solution that makes this code modular, alleviating the problem.