TL;DR: This paper presents the precursory toolkit needed to build and train an efficient network from scratch.
Abstract: Deep neural networks have become commercially viable in fields such as machine vision, speech and language processing, and data acquisition, among other applications. Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and their variants have, under some conditions, outperformed human experts. However, existing deep network models are ill-suited to low-power devices and mission-critical applications due to their high computational cost, latency, or memory footprint, which prevents them from scaling. Moreover, little effort has been put into making architectural improvements modular or model-agnostic. In the developing regions of the world, efficient and frugal learning frameworks could have a huge socio-economic impact: AI can be a game-changer, enabling domain experts to pursue unique strategies for social good, provided ML prototyping is intuitive. This paper therefore serves a dual purpose: first, to present easily implementable structural modifications; and second, to provide a comparative overview of prevalent compression techniques. Finally, we conclude by discussing and proposing open challenges in these areas.
Keywords: deep neural network, prototyping, compression, efficiency