VICE: Variational Inference for Concept Embeddings

29 Sept 2021 (modified: 22 Oct 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: cognitive science, variational Bayes, category representation, sparse coding, representation learning, interpretable representations
Abstract: In this paper we introduce Variational Inference for Concept Embeddings (VICE), a novel method for learning object concept embeddings from human behavior in an odd-one-out task. We use variational inference to obtain a sparse, non-negative solution with uncertainty information about each embedding value. We leverage this information in a statistical procedure for selecting the dimensionality of the model, based on hypothesis testing over a validation set. VICE performs as well as or better than previous methods on a variety of criteria: accuracy of predicting human behavior in an odd-one-out task, calibration to (empirical) human choice probabilities, reproducibility of object representations across different random initializations, and performance on small datasets. The latter is particularly important in cognitive science, where data collection is expensive. Finally, VICE yields highly interpretable object representations, allowing humans to describe the characteristics represented by each latent dimension.
One-sentence Summary: A variational inference approach for learning sparse, non-negative object concept embeddings from human behavior in an odd-one-out task.
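To make the task concrete, the following is a minimal sketch of the kind of odd-one-out choice model the abstract refers to: given non-negative object embeddings, the model scores each pair in a triplet by dot-product similarity and treats the object outside the most similar pair as the odd one out. The softmax-over-pair-similarities rule and the function name are illustrative assumptions, not details taken from the paper itself.

```python
import numpy as np

def odd_one_out_probs(X, triplet):
    """Probability that each object in a triplet is the odd-one-out.

    X is an (n_objects, n_dims) array of non-negative embeddings;
    triplet is a tuple of three object indices. Assumed choice rule:
    the pair with the highest dot-product similarity is kept together,
    so the remaining object is the odd one out (softmax over pairs).
    """
    i, j, k = triplet
    # Pairwise dot-product similarities between the three objects.
    s_ij = X[i] @ X[j]
    s_ik = X[i] @ X[k]
    s_jk = X[j] @ X[k]
    # If pair (j, k) is most similar, object i is the odd one out, etc.
    sims = np.array([s_jk, s_ik, s_ij])
    # Numerically stable softmax over the three pair similarities.
    e = np.exp(sims - sims.max())
    return e / e.sum()
```

Under this sketch, model accuracy and calibration (two of the evaluation criteria listed above) amount to comparing these predicted probabilities against empirical human choice frequencies on held-out triplets.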
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2205.00756/code)