Learning without Forgetting for Decentralized Neural Nets with Low Communication Overhead

26 Sept 2020 · OpenReview Archive Direct Upload · Readers: Everyone
Abstract: We consider the problem of training a neural net in a decentralized scenario with low communication overhead. We address the problem by adapting a recently proposed incremental learning approach called 'learning without forgetting'. While an incremental learning approach assumes that data arrives in a sequence, the nodes of the decentralized scenario cannot share data with each other and there is no master node. Nodes can only communicate model parameters among neighbors, and this communication of model parameters is the key to adapting the 'learning without forgetting' approach to the decentralized scenario. We use random-walk-based communication to handle a highly limited communication resource.
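The abstract's scheme can be illustrated with a minimal sketch: a single model performs a random walk over the node graph, and each visited node trains on its private data while penalizing drift from the received model's predictions, a stand-in for the 'learning without forgetting' distillation term. All names here (the ring topology, `local_update`, the penalty weight `lam`) are illustrative assumptions, not the paper's implementation, and a linear model with squared loss replaces the neural net for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup (illustrative): a ring of nodes, each holding a private
# local dataset drawn from a shared ground-truth linear model.
n_nodes, dim = 5, 3
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
w_true = rng.normal(size=dim)
local_data = []
for _ in range(n_nodes):
    X = rng.normal(size=(20, dim))
    y = X @ w_true + 0.1 * rng.normal(size=20)
    local_data.append((X, y))

def local_update(w, X, y, w_old, lam=0.5, lr=0.01, steps=50):
    """Gradient steps on the local squared loss plus a 'without forgetting'
    penalty keeping the new model close to the received model's predictions
    on the local inputs (a crude stand-in for LwF-style distillation)."""
    for _ in range(steps):
        grad_fit = X.T @ (X @ w - y) / len(y)
        grad_keep = X.T @ (X @ w - X @ w_old) / len(y)
        w = w - lr * (grad_fit + lam * grad_keep)
    return w

# Random-walk training: only the parameter vector w travels between
# neighbors; the data never leaves its node.
w, node = np.zeros(dim), 0
for step in range(200):
    X, y = local_data[node]
    w = local_update(w, X, y, w_old=w.copy())
    node = int(rng.choice(neighbors[node]))  # hop to a random neighbor
```

Per hop, the communication cost is a single parameter vector, which is what makes the random walk attractive when bandwidth is the binding constraint.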