Spherization Layer: Representation Using Only Angles

Published: 31 Oct 2022, 18:00; Last Modified: 15 Dec 2022, 05:44. NeurIPS 2022 Accept.
Keywords: representation learning, hyperspherical learning, angular similarity, spherization
TL;DR: Spherization Layer is an explicit solution for learning representations with only angles.
Abstract: In the neural network literature, angular similarity between feature vectors is frequently used for interpreting or re-using learned representations. However, the inner product in neural networks partially disperses information over the scales and angles of the involved input vectors and weight vectors. Therefore, when using only angular similarity on representations trained with the inner product, information loss occurs in downstream methods, which limits their performance. In this paper, we propose the $\textit{spherization layer}$ to represent all information on angular similarity. The layer 1) maps the pre-activations of input vectors into a specific range of angles, 2) converts the angular coordinates of the vectors to Cartesian coordinates with an additional dimension, and 3) trains decision boundaries from hyperplanes, without bias parameters, passing through the origin. This approach guarantees that representation learning always occurs on the hyperspherical surface without any loss of information, unlike other projection-based methods. Furthermore, this method can be applied to any network by replacing an existing layer. We validate the functional correctness of the proposed method in a toy task, its retention ability in well-known image classification tasks, and its effectiveness in a word analogy test and few-shot learning. Code is publicly available at https://github.com/GIST-IRR/spherization_layer
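The three steps in the abstract can be sketched as follows. This is a minimal illustrative reading, not the authors' implementation: it assumes a sigmoid squashing function for step 1 (the paper's exact mapping and angle range may differ) and the standard hyperspherical-to-Cartesian conversion for step 2, which turns $d$ angles into a unit vector in $d+1$ dimensions. Step 3 then amounts to a bias-free linear layer, whose decision boundaries are hyperplanes through the origin, so only angles carry information.

```python
import numpy as np

def spherize(z, lo=0.1, hi=np.pi / 2 - 0.1):
    """Illustrative forward pass of a spherization-style layer.

    1) Squash pre-activations z into the angle range (lo, hi); a sigmoid
       is assumed here for illustration.
    2) Treat the squashed values as angular coordinates and convert them
       to Cartesian coordinates on the unit hypersphere, adding one
       extra dimension (d angles -> d+1 coordinates).
    """
    phi = lo + (hi - lo) / (1.0 + np.exp(-z))  # angles in (lo, hi)
    sin_cum = np.cumprod(np.sin(phi))          # running products of sines
    x = np.empty(phi.size + 1)
    x[0] = np.cos(phi[0])
    x[1:-1] = sin_cum[:-1] * np.cos(phi[1:])   # sin(phi_1)...sin(phi_{k-1}) cos(phi_k)
    x[-1] = sin_cum[-1]                        # sin(phi_1)...sin(phi_d)
    return x  # unit-norm by construction

# Step 3: bias-free classifier -- hyperplanes through the origin,
# so logits depend only on the angle between x and each weight row.
rng = np.random.default_rng(0)
z = rng.normal(size=4)
x = spherize(z)                 # lies on the unit sphere in R^5
W = rng.normal(size=(3, 5))     # hypothetical weight matrix, no bias
logits = W @ x
```

Because `x` has unit norm, the inner product `W @ x` reduces (up to the row norms of `W`) to cosine similarity, which is what lets all learned information live in angles.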
Supplementary Material: pdf