Title: Strongly Self-Normalizing Neural Networks with Applications to Implicit Representation Learning
Abstract: Recent studies have shown that wide neural networks with orthogonal linear layers and Gaussian Poincaré normalized activation functions avoid vanishing and exploding gradients for input vectors with the correct magnitude. This paper introduces a strengthening of the condition that the activation function be Gaussian Poincaré normalized, which creates robustness to deviations of the pre-activations from the standard normal distribution, thereby reducing the dependence on the network being wide and on the input vector having the correct magnitude. In implicit representation learning, this allows the training of deep networks of this type in which the linear layers are no longer constrained to be orthogonal transformations. Networks of this type can be fitted to a reference image with one tenth of the mean squared error achievable with previous methods. Also presented are an improved positional encoding for implicit representation learning of two-dimensional images and a small-batch training procedure for fitting neural networks to images, which allows fitting in fewer epochs and leads to a substantial improvement in training time.
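To make the central condition concrete, the sketch below numerically checks a candidate activation against one common reading of Gaussian Poincaré normalization, namely that E[φ(Z)²] = 1 and E[φ'(Z)²] = 1 for Z ~ N(0, 1). Both this reading of the definition and the example activation √2·sin(x + π/4) are assumptions made for illustration; neither is taken from the paper.

```python
import numpy as np

# Illustrative sketch only (not the paper's code). It numerically checks a
# candidate activation against one common reading of Gaussian Poincare
# normalization: E[phi(Z)^2] = 1 and E[phi'(Z)^2] = 1 for Z ~ N(0, 1).
# Both this reading and the example activation sqrt(2)*sin(x + pi/4) are
# assumptions made for illustration.

rng = np.random.default_rng(0)
z = rng.standard_normal(1_000_000)  # Monte Carlo samples of Z ~ N(0, 1)

def phi(x):
    # Example sinusoidal activation (in the spirit of SIREN-style layers).
    return np.sqrt(2.0) * np.sin(x + np.pi / 4)

def phi_prime(x):
    # Its derivative.
    return np.sqrt(2.0) * np.cos(x + np.pi / 4)

second_moment = np.mean(phi(z) ** 2)          # estimate of E[phi(Z)^2]
gradient_moment = np.mean(phi_prime(z) ** 2)  # estimate of E[phi'(Z)^2]

# For this particular activation both estimates come out close to 1,
# so it satisfies the assumed normalization condition.
print(f"E[phi(Z)^2]  ~ {second_moment:.4f}")
print(f"E[phi'(Z)^2] ~ {gradient_moment:.4f}")
```

Under the same assumed definition, the identity map also satisfies both conditions, whereas an unscaled ReLU does not (both of its moments equal 1/2).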
One-sentence Summary: Similar to SIREN, but able to fit images to higher accuracy (PSNR=67 instead of PSNR=50 for a specific reference image).
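As background for the reported numbers (standard definitions, not results from the paper): for images with values in [0, 1], PSNR and mean squared error are related by

```latex
\mathrm{PSNR} = 10\,\log_{10}\!\frac{\mathrm{MAX}^2}{\mathrm{MSE}} = -10\,\log_{10}\mathrm{MSE} \qquad (\mathrm{MAX} = 1),
```

so reducing the MSE by a factor of 10 raises the PSNR by 10·log10(10) = 10 dB.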