Local Learning with Neuron Groups

Published: 21 Apr 2022, Last Modified: 20 Oct 2024 · Cells2Societies 2022 Poster
Keywords: Local learning, Model parallelism
TL;DR: We investigate applying local loss functions at the level of non-overlapping neuron groups.
Abstract: Traditional deep network training optimizes a single monolithic objective jointly over all components, which limits opportunities for parallelization. Local learning is an approach to model parallelism that replaces the standard end-to-end setup with local objective functions, permitting model components in a deep network to be trained in parallel. Recent work has demonstrated that variants of local learning can train modern deep networks efficiently. However, the amount of computation these approaches can distribute is typically limited by the number of layers in the network. In this work, we study how local learning can be applied by splitting layers or modules into sub-components, introducing a notion of width-wise modularity alongside the depth-wise modularity usually associated with local learning. We investigate local-learning penalties that permit such models to be trained efficiently. Our experiments on the CIFAR-10 dataset demonstrate that width-level modularity can yield computational advantages over existing local-learning methods and opens opportunities for improved model-parallel training. This approach increases the potential for distribution and could serve as a backbone for collaborative learning frameworks.
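To make the idea concrete, below is a minimal PyTorch sketch of width-wise modularity: a layer split into non-overlapping neuron groups, each trained by its own local auxiliary loss. This is an illustration under assumed design choices (linear groups, cross-entropy auxiliary heads), not the paper's actual architecture or penalties; the names `NeuronGroupBlock` and `LocallyTrainedLayer` are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuronGroupBlock(nn.Module):
    """One non-overlapping neuron group with its own local loss.

    Hypothetical sketch: the paper's exact architecture and
    local-learning penalties are not reproduced here.
    """
    def __init__(self, in_features: int, group_width: int, num_classes: int):
        super().__init__()
        self.layer = nn.Linear(in_features, group_width)
        # Auxiliary head used only to compute this group's local loss.
        self.aux_head = nn.Linear(group_width, num_classes)

    def forward(self, x, targets=None):
        # Detach the input so gradients never flow back into the
        # previous module: each group learns from its local loss alone.
        h = F.relu(self.layer(x.detach()))
        local_loss = None
        if targets is not None:
            local_loss = F.cross_entropy(self.aux_head(h), targets)
        return h, local_loss

class LocallyTrainedLayer(nn.Module):
    """A layer split width-wise into non-overlapping neuron groups."""
    def __init__(self, in_features, group_width, num_groups, num_classes):
        super().__init__()
        self.groups = nn.ModuleList(
            [NeuronGroupBlock(in_features, group_width, num_classes)
             for _ in range(num_groups)]
        )

    def forward(self, x, targets=None):
        outputs, losses = [], []
        for group in self.groups:
            h, loss = group(x, targets)
            outputs.append(h)
            if loss is not None:
                losses.append(loss)
        # Concatenating the group outputs restores the full layer width.
        return torch.cat(outputs, dim=-1), losses
```

Because every group's input is detached, the per-group losses share no gradient paths, so `sum(losses).backward()` decomposes into independent per-group updates that could in principle run on separate devices, the width-wise analogue of the depth-wise parallelism in prior local-learning work.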
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/local-learning-with-neuron-groups/code)