Keywords: pruning, directionality, hierarchical organisation, perceptron
TL;DR: We show that orderedness, a measure of directionality in neural networks, can be increased in an initially non-directional network by applying appropriate pruning techniques in conjunction with gradient descent.
Abstract: Inspired by the prevalence of recurrent circuits in biological brains, we investigate the degree to which directionality is a helpful inductive bias for artificial neural networks. Taking directionality to mean topologically ordered information flow between neurons, we formalise a perceptron layer with all-to-all connections (mathematically equivalent to a weight-tied recurrent neural network) and demonstrate that directionality, a hallmark of modern feed-forward networks, can be \emph{induced} rather than hard-wired by applying appropriate pruning techniques. Across different random seeds, our pruning schemes successfully induce greater topological ordering of information flow between neurons without compromising performance, suggesting that directionality is \emph{not} a prerequisite for learning, but may be an advantageous inductive bias discoverable by gradient descent and sparsification.
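The following is a minimal sketch of the setup the abstract describes, written in PyTorch. The class and function names, the number of recurrent steps, the magnitude-pruning rule, and the specific "orderedness" score are illustrative assumptions, not the paper's exact definitions; in particular, the orderedness proxy below uses a fixed neuron indexing rather than searching over permutations.

```python
import torch
import torch.nn as nn


class AllToAllLayer(nn.Module):
    """n neurons with all-to-all connections, iterated for a fixed number of
    steps with the same weight matrix each step -- mathematically a
    weight-tied recurrent network (step count is an assumption here)."""

    def __init__(self, n, steps=5):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n, n) * n ** -0.5)
        self.steps = steps

    def forward(self, x):
        h = x
        for _ in range(self.steps):  # same W at every step (weight tying)
            h = torch.relu(h @ self.W.t())
        return h


def magnitude_prune(layer, frac=0.2):
    """Zero out the smallest-magnitude fraction of weights -- one plausible
    pruning scheme; the paper may use a different one."""
    with torch.no_grad():
        w = layer.W.abs().flatten()
        k = int(frac * w.numel())
        if k > 0:
            thresh = w.kthvalue(k).values
            layer.W[layer.W.abs() <= thresh] = 0.0


def orderedness(W):
    """Illustrative proxy for topological ordering: the fraction of total
    weight mass above the diagonal, i.e. flowing 'forward' under the current
    neuron indexing. 1.0 = strictly feed-forward; ~0.5 = no ordering."""
    a = W.abs()
    return (torch.triu(a, diagonal=1).sum() / (a.sum() + 1e-12)).item()
```

In a training loop one would interleave gradient-descent updates with calls to `magnitude_prune` and track `orderedness(layer.W)` across seeds; under this sketch, directionality being induced would show up as the score drifting above its initial near-0.5 value while task performance holds.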
Code: zip
Submission Number: 69