Supersymmetric Artificial Neural Network

Anonymous

28 Mar 2019 (modified: 31 Jul 2020) · OpenReview Anonymous Preprint Blind Submission · Readers: Everyone
Abstract: The “Supersymmetric Artificial Neural Network” in deep learning (denoted (x; θ, θ̄)ᵀw) espouses the importance of considering biological constraints with the aim of further generalizing backpropagation. In the progression of ‘solution geometries’, moving from SO(n) representations (such as Perceptron-like models) to SU(n) representations (such as Unitary RNNs) has guaranteed progressively richer representations in the weight space of the artificial neural network, and hence better and better hypotheses were generatable. The Supersymmetric Artificial Neural Network explores a natural next step, namely SU(m|n) representation. These supersymmetric biological brain representations (Perez et al.) can be expressed in the supercharge-compatible special unitary notation SU(m|n), or (x; θ, θ̄)ᵀw parameterized by θ and θ̄, which are supersymmetric directions, unlike the single θ seen in the typical non-supersymmetric deep learning model. Notably, supersymmetric values can encode or represent more information than the typical deep learning model, for example in terms of “partner potential” signals.
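The SO(n) → SU(n) step in the abstract can be illustrated with a minimal sketch of a Unitary-RNN-style weight parameterization: exponentiating a skew-Hermitian generator keeps the weight matrix exactly on the unitary group U(n). This is an illustrative assumption about how such constrained weight spaces are commonly realized, not the paper's SU(m|n) superalgebra construction; all function names here are hypothetical.

```python
import numpy as np

def random_skew_hermitian(n, seed=0):
    """Random skew-Hermitian generator A (A^H = -A); exp(A) lies in U(n)."""
    rng = np.random.default_rng(seed)
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m - m.conj().T) / 2

def unitary_from_generator(a):
    """Compute exp(A) for skew-Hermitian A via the Hermitian matrix H = iA.

    H is Hermitian, so H = V diag(w) V^H with real eigenvalues w, and
    exp(A) = exp(-iH) = V diag(exp(-iw)) V^H, which is exactly unitary.
    """
    h = 1j * a
    w, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * w)) @ v.conj().T

# A 4x4 weight matrix constrained to U(4): gradients can be taken with
# respect to the unconstrained generator, never leaving the group.
a = random_skew_hermitian(4)
u = unitary_from_generator(a)
print(np.allclose(u.conj().T @ u, np.eye(4)))  # → True
```

Optimizing over the generator rather than the weight matrix itself is one standard way such "solution geometry" constraints are maintained during training; an SU(m|n) model would analogously require generators drawn from a super-Lie algebra.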
Keywords: Deep Learning, Differentiable Programming, Supersymmetry, Supermanifold, Supermathematics, Super-Lie Algebra
TL;DR: Generalizing backward propagation, using formal methods from supersymmetry.