Switch Spaces: Learning Product Spaces with Sparse Gating

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: representation learning, product space
Abstract: Aligning the geometric inductive bias with the underlying structure of the data is critical for representation learning. To achieve this goal, we propose \textit{switch spaces}, a data-driven representation learning approach. A switch space is a generalization of a product space, which is composed of multiple Euclidean and non-Euclidean (e.g., hyperbolic, spherical) spaces. Given $N$ candidate spaces, our model uses a sparse gating mechanism to let each input data point choose $K$ ($K < N$) of them and combine the chosen spaces into a product space. In doing so, $\binom{N}{K}$ product spaces are generated automatically within a single model, and each input data point is processed by one of them with greater specialization, making the spaces switchable. In addition, switch space models are efficient: since only $K$ spaces are active per input, their computational cost stays constant regardless of the model size. We apply switch spaces to the knowledge graph (KG) completion task and propose \textit{SwisE}, which obtains state-of-the-art performance on benchmark KG datasets. We also show that switch spaces help achieve promising results on the item recommendation task. A model analysis is conducted to inspect the inner workings of switch spaces.
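To make the sparse gating idea concrete, here is a minimal PyTorch sketch of the core mechanism described in the abstract: a learned gate scores the $N$ candidate spaces, keeps only the top $K$, and combines the per-space scores of the selected spaces. The gating network, the softmax weighting over the selected spaces, and all names below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseSpaceGate(nn.Module):
    """Illustrative sketch (not the paper's exact model): each input
    selects the top-K of N candidate spaces via a learned gate and
    combines their scores with softmax-normalized weights."""

    def __init__(self, input_dim: int, num_spaces: int, k: int):
        super().__init__()
        assert k < num_spaces, "K must be smaller than N"
        self.k = k
        # Gating network: one logit per candidate space.
        self.gate = nn.Linear(input_dim, num_spaces)

    def forward(self, x: torch.Tensor, space_scores: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_dim)
        # space_scores: (batch, N), e.g. a score computed for the same
        # input in each Euclidean / hyperbolic / spherical space.
        logits = self.gate(x)                               # (batch, N)
        topk_vals, topk_idx = logits.topk(self.k, dim=-1)   # pick K of N
        weights = F.softmax(topk_vals, dim=-1)              # renormalize over the chosen K
        chosen = space_scores.gather(-1, topk_idx)          # (batch, K)
        # Only the K selected spaces contribute to the output.
        return (weights * chosen).sum(dim=-1)               # (batch,)


if __name__ == "__main__":
    gate = SparseSpaceGate(input_dim=32, num_spaces=5, k=2)
    x = torch.randn(8, 32)
    scores = torch.randn(8, 5)  # stand-in for per-space scores
    print(gate(x, scores).shape)  # torch.Size([8])
```

Because only the $K$ selected spaces contribute to each input's output, the per-input cost is governed by $K$ rather than $N$, which matches the constant-cost efficiency claim in the abstract.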
One-sentence Summary: We propose switch spaces, a data-driven representation learning approach that learns product spaces via sparse gating.