A Permutation-Invariant Representation of Neural Networks with Neuron Embeddings

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: neural networks, neural network representation, neuroevolution, transfer learning
Abstract: Neural networks are traditionally represented in terms of their weights. A key property of this representation is that it is not unique: permuting the order of the neurons in a layer yields a different but functionally equivalent set of weights. These representations are generally incompatible with one another, so transferring part of a network without the layers that precede it usually destroys the learned relationships. This paper proposes a method that represents a neural network in terms of an embedding of each neuron rather than explicit weights. In addition to reducing the number of free parameters, this encoding is agnostic to the ordering of neurons, bypassing a key problem for weight-based representations. It allows individual neurons and layers to be transplanted into another network while maintaining their functionality, which is particularly important for tasks such as transfer learning and neuroevolution. We show through experiments on the MNIST and CIFAR-10 datasets that this method can represent networks that achieve performance identical to a direct weight representation, and that transfer done this way preserves much of the performance between two networks that are distant in parameter space.
One-sentence Summary: We propose a permutation-invariant way of representing neural networks, resulting in better parameter efficiency and cross-model transfer of information.
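
To make the idea concrete, below is a minimal NumPy sketch of one way a neuron-embedding representation could work. The abstract does not specify how weights are reconstructed from embeddings, so everything here is an illustrative assumption: we assume the weight between two connected neurons is the dot product of their embedding vectors, and the embedding dimension D and all names are hypothetical. The sketch checks the permutation-invariance claim: reordering a layer's neurons, each keeping its own embedding, leaves the network function unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8                      # embedding dimension (illustrative choice)
n_in, n_hidden, n_out = 4, 5, 3

# One embedding vector per neuron. The embeddings are the only free
# parameters: (n_in + n_hidden + n_out) * D numbers instead of
# n_in * n_hidden + n_hidden * n_out explicit weights.
e_in = rng.normal(size=(n_in, D))
e_hidden = rng.normal(size=(n_hidden, D))
e_out = rng.normal(size=(n_out, D))

def layer_weights(e_src, e_dst):
    """Reconstruct a dense weight matrix from the embeddings of the
    source and destination neurons (assumed dot-product decoder)."""
    return e_dst @ e_src.T                    # shape: (n_dst, n_src)

def forward(x, layer_embeddings):
    """Run an MLP whose weights are derived on the fly from embeddings."""
    h = x
    for e_src, e_dst in zip(layer_embeddings[:-1], layer_embeddings[1:]):
        h = np.tanh(layer_weights(e_src, e_dst) @ h)
    return h

x = rng.normal(size=n_in)
y = forward(x, [e_in, e_hidden, e_out])

# Permutation invariance: reordering the hidden neurons permutes the rows
# of the first weight matrix and the columns of the second one
# consistently, so the network computes the same function.
perm = rng.permutation(n_hidden)
y_permuted = forward(x, [e_in, e_hidden[perm], e_out])
assert np.allclose(y, y_permuted)
print("outputs match after permuting hidden neurons:", np.allclose(y, y_permuted))
```

Under this dot-product assumption, transplanting a neuron into another network amounts to copying its embedding vector: the embedding carries the neuron's learned role, rather than a row of weights tied to one particular ordering of the preceding layer.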