Why do embedding spaces look as they do?

Published: 28 Jan 2022, Last Modified: 13 Feb 2023. ICLR 2022 Submission. Readers: Everyone
Abstract: The power of embedding representations is a curious phenomenon. For embeddings to work effectively as feature representations, there must exist substantial latent structure inherent in the domain to be encoded. Language vocabularies and Wikipedia topics are human-generated structures that reflect how people organize their world and what they find important. The structure of the resulting embedding spaces reflects the human evolution of language formation and the cultural processes shaping our world. This paper studies what the observed structure of embeddings can tell us about the natural processes that generate new knowledge or concepts. We demonstrate that word and graph embeddings trained on standard datasets using several popular algorithms consistently share two distinct properties: (1) a neighbor frequency concentration that decreases with rank, and (2) specific clustering velocities and power-law-based community structures. We then assess a variety of generative models of embedding spaces against these criteria, and conclude that incremental insertion processes based on the Barabási-Albert network generation process best model the observed phenomena in language and network data.
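For readers unfamiliar with the Barabási-Albert process mentioned in the abstract, the sketch below illustrates preferential-attachment graph growth in its standard form. It is not the authors' implementation; the function name barabasi_albert and the parameters n, m, and seed are our own illustrative choices.

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow an undirected graph by Barabasi-Albert preferential attachment.

    Starts from m initial nodes; every subsequent node attaches to m
    distinct existing nodes chosen with probability proportional to
    their current degree. Returns the edge set as (u, v) pairs, u < v.
    """
    rng = random.Random(seed)
    edges = set()
    targets = list(range(m))   # the first added node links to all m initial nodes
    repeated = []              # each node appears here once per unit of degree
    for new_node in range(m, n):
        for t in targets:
            edges.add((min(new_node, t), max(new_node, t)))
        # Update the degree-weighted pool: each target gained one edge,
        # and the new node now has degree m.
        repeated.extend(targets)
        repeated.extend([new_node] * m)
        # Draw m distinct, degree-biased targets for the next node.
        targets = []
        while len(targets) < m:
            candidate = rng.choice(repeated)
            if candidate not in targets:
                targets.append(candidate)
    return edges
```

For example, barabasi_albert(n=10000, m=3, seed=0) produces a graph whose degree distribution approximately follows a power law, the kind of heavy-tailed community structure the abstract associates with embedding neighborhoods.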