Hidden Schema Networks

Published: 01 Feb 2023, Last Modified: 14 Oct 2024. Submitted to ICLR 2023.
Keywords: Discrete representation learning, Unsupervised knowledge graph learning, Relational inductive biases, Semantic representation, Pretrained language models, Discrete VAE, Neuro-symbolic AI, Language modelling
TL;DR: A neural language model that discovers networks of symbols (schemata) from text datasets via a VAE framework with pretrained BERT and GPT-2 as encoder and decoder, respectively.
Abstract: Most modern language models infer representations that, albeit powerful, lack both compositionality and semantic interpretability. Starting from the assumption that a large proportion of semantic content is necessarily relational, we introduce a neural language model that discovers networks of symbols (schemata) from text datasets. Using a variational autoencoder (VAE) framework, our model encodes sentences into sequences of symbols (composed representations), which correspond to the nodes visited by biased random walkers on a global latent graph. We first demonstrate that the model is able to uncover ground-truth graphs from artificially generated datasets of random token sequences. Next, we leverage pretrained BERT and GPT-2 language models as encoder and decoder, respectively, to train our model on language modelling and commonsense knowledge generation tasks. Qualitatively, the model is able to infer schema networks whose nodes (symbols) can be interpreted as encoding different aspects of natural language (e.g., topics or sentiments). Quantitatively, our results show that the model successfully interprets the encoded symbol sequences, as it achieves state-of-the-art scores on VAE language modelling benchmarks. Source code to reproduce all experiments is provided with the supplementary material.
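To make the latent mechanism described in the abstract concrete, below is a minimal sketch (not the authors' implementation) of how a sentence representation could be a node sequence sampled by a biased random walker on a global latent graph. All names and choices here (NUM_SYMBOLS, WALK_LEN, the exponential bias parametrization, the dead-end restart) are illustrative assumptions; in the actual model the biases would come from the encoder (e.g., a BERT head) and the graph itself would be learned.

```python
# Hypothetical sketch of the abstract's latent mechanism: a sentence is
# represented by the nodes visited by a biased random walk on a latent graph.
import torch

NUM_SYMBOLS = 16   # assumed number of schema nodes (symbols)
WALK_LEN = 5       # assumed number of symbols encoding one sentence

# Global latent graph: a random binary adjacency matrix over symbols.
# In the paper's setting this graph is learned, not fixed.
adjacency = (torch.rand(NUM_SYMBOLS, NUM_SYMBOLS) < 0.3).float()
adjacency.fill_diagonal_(0)

# Per-node biases tilting the walker's transitions; in a VAE these would
# be produced by the encoder from the input sentence.
node_bias = torch.randn(NUM_SYMBOLS)

def biased_random_walk(adj: torch.Tensor, bias: torch.Tensor, length: int) -> list:
    """Sample a node sequence: at each step, move to a neighbor with
    probability proportional to exp(bias[neighbor])."""
    node = torch.randint(adj.size(0), (1,)).item()
    walk = [node]
    for _ in range(length - 1):
        neighbors = adj[node]
        if neighbors.sum() == 0:
            # Dead end: restart at a uniformly random node (an assumption).
            node = torch.randint(adj.size(0), (1,)).item()
        else:
            probs = neighbors * bias.exp()
            probs = probs / probs.sum()
            node = torch.multinomial(probs, 1).item()
        walk.append(node)
    return walk

# One sampled symbol sequence, e.g. [3, 7, 2, 7, 11]; the decoder (GPT-2 in
# the paper) would condition on such a sequence to reconstruct the sentence.
print(biased_random_walk(adjacency, node_bias, WALK_LEN))
```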
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/hidden-schema-networks/code)