Joint Representations of Text and Knowledge Graphs for Retrieval and Evaluation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: Representation learning, Text generation, Knowledge bases, Evaluation
TL;DR: We learn joint representations for knowledge base elements and corresponding text, which enables both retrieval and referenceless adequacy evaluation.
Abstract: A key feature of neural models is that they can produce semantic vector representations of objects (texts, images, speech, etc.) ensuring that similar objects are close to each other in the vector space. While much work has focused on learning text, image, knowledge-base (KB) and image-text representations, there are no aligned cross-modal text-KB representations. One challenge for learning such representations is the lack of parallel data. We train retrieval models on datasets of (graph, text) pairs where the graph is a KB subgraph and the text has been heuristically aligned with the graph. When performing retrieval on WebNLG, a clean parallel corpus, our best model achieves 80% accuracy and 99% recall@10, showing that similar texts and KB graphs are mapped close to each other. We use this property to create a similarity metric between English text and KB graphs, matching state-of-the-art metrics in terms of correlation with human judgments even though, unlike them, it does not require a reference text to compare against.
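The abstract does not spell out the model architecture, so the following is only a minimal sketch of the retrieval-as-metric idea it describes: embed a linearized KB subgraph and a text with an encoder, then use cosine similarity both to rank candidates (retrieval) and as a referenceless adequacy score. The encoder choice (bert-base-uncased), the mean-pooling `embed` helper, and the `linearize` scheme for flattening triples are all illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-uncased"  # placeholder; the paper does not name its encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

def embed(texts):
    """Mean-pool the last hidden states into one vector per input (assumed pooling)."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state       # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)

def linearize(triples):
    """Flatten a KB subgraph into a token sequence; one of many possible schemes."""
    return " [SEP] ".join(f"{s} {p} {o}" for s, p, o in triples)

# WebNLG-style toy example: one subgraph, two candidate texts.
graph = linearize([("Alan_Bean", "occupation", "Astronaut"),
                   ("Alan_Bean", "mission", "Apollo_12")])
candidates = ["Alan Bean was an astronaut who flew on the Apollo 12 mission.",
              "The Eiffel Tower is located in Paris."]

g = F.normalize(embed([graph]), dim=-1)     # (1, H)
t = F.normalize(embed(candidates), dim=-1)  # (2, H)
scores = (g @ t.T).squeeze(0)               # cosine similarities
print(scores)  # the aligned text should receive the higher score
```

Under this reading, ranking all corpus texts by the score gives the retrieval accuracy and recall@10 numbers reported above, while the raw score for a single (graph, text) pair serves directly as the referenceless adequacy metric.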
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning