"Supersymmetric Artificial Neural Network"

09 Feb 2018 (modified: 31 Jul 2020) · ICLR 2018 Workshop Submission · Readers: Everyone
  • Keywords: Deep Learning, Differentiable Programming, Supersymmetry, Supermanifold, Supermathematics, Super-Lie Algebra
  • TL;DR: Generalizing backward propagation, using formal methods from supersymmetry.
  • Abstract: The “Supersymmetric Artificial Neural Network” in deep learning (denoted (x; θ, θ̄)ᵀw) espouses the importance of considering biological constraints with the aim of further generalizing backward propagation. Looking at the progression of ‘solution geometries’: going from SO(n) representations (such as Perceptron-like models) to SU(n) representations (such as Unitary RNNs) has guaranteed progressively richer representations in the weight space of the artificial neural network, and hence progressively better hypotheses could be generated. The Supersymmetric Artificial Neural Network explores a natural next step, namely SU(m|n) representation. These supersymmetric biological brain representations (Perez et al.) can be expressed in supercharge-compatible special unitary notation SU(m|n), or (x; θ, θ̄)ᵀw parameterized by θ and θ̄, which are supersymmetric directions, unlike the single θ seen in typical non-supersymmetric deep learning models. Notably, supersymmetric values can encode or represent more information than those of the typical deep learning model, in terms of “partner potential” signals, for example.
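The abstract's ‘solution geometry’ progression refers to constraining a network's weight matrices to lie in successively larger Lie groups. As a minimal sketch (not code from the submission; all names are illustrative), the SU(n) stage, used by Unitary RNNs, can be realized by exponentiating a skew-Hermitian generator A (with A† = −A and zero trace), which guarantees the resulting weight matrix is special unitary:

```python
# Sketch of an SU(n)-constrained weight matrix, as used in Unitary-RNN-style
# models. A skew-Hermitian, traceless generator A exponentiates to a matrix
# U = exp(A) with U† U = I and det(U) = 1, i.e. U ∈ SU(n).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4

# Random skew-Hermitian generator: A = B - B†, so A† = -A.
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = B - B.conj().T

# Remove the (purely imaginary) trace so det(exp(A)) = exp(tr A) = 1.
A -= (np.trace(A) / n) * np.eye(n)

U = expm(A)  # special unitary weight matrix

# Verify membership in SU(n): unitarity and unit determinant.
assert np.allclose(U.conj().T @ U, np.eye(n), atol=1e-10)
assert np.isclose(np.linalg.det(U), 1.0, atol=1e-10)
```

In a trainable model the free parameters would be the entries of the generator A, with gradients flowing through the matrix exponential. The SU(m|n) step the abstract proposes would replace A with a supermatrix carrying additional odd (Grassmann-valued) blocks, which this ordinary-matrix sketch does not attempt to model.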