HUBERT Untangles BERT to Improve Transfer across NLP Tasks

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
TL;DR: We introduce HUBERT, which combines the power of Tensor-Product Representations with the BERT language model.
Abstract: We introduce HUBERT, which combines the structured-representational power of Tensor-Product Representations (TPRs) with BERT, a pre-trained bidirectional transformer language model. We validate the effectiveness of our model on the GLUE benchmark and the HANS dataset. We also show that there is shared structure across different NLP datasets which HUBERT, but not BERT, is able to learn and leverage. We conduct extensive transfer-learning experiments to confirm this proposition.
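
The abstract describes the combination only at a high level. As a rough illustration of the idea, the sketch below layers a tensor-product binding head over BERT token embeddings: each token state is softly assigned a filler and a role, and their outer product forms the structured representation. It assumes PyTorch and the HuggingFace `transformers` library; the `TPRHead` module, its dimensions, and the soft role/filler selection are illustrative assumptions, not the exact HUBERT architecture from the paper.

```python
# Illustrative sketch only: a TPR-style binding head on top of BERT.
# Assumes PyTorch and HuggingFace `transformers`; names and sizes are hypothetical.
import torch
import torch.nn as nn
from transformers import BertModel


class TPRHead(nn.Module):
    """Decompose each token embedding into a filler and a role vector,
    then bind them with an outer (tensor) product."""

    def __init__(self, hidden_size=768, n_fillers=32, n_roles=16,
                 filler_dim=32, role_dim=16):
        super().__init__()
        self.filler_attn = nn.Linear(hidden_size, n_fillers)  # soft filler selection
        self.role_attn = nn.Linear(hidden_size, n_roles)      # soft role selection
        self.filler_emb = nn.Embedding(n_fillers, filler_dim)
        self.role_emb = nn.Embedding(n_roles, role_dim)

    def forward(self, token_states):
        # token_states: (batch, seq_len, hidden_size)
        f_weights = torch.softmax(self.filler_attn(token_states), dim=-1)
        r_weights = torch.softmax(self.role_attn(token_states), dim=-1)
        fillers = f_weights @ self.filler_emb.weight   # (batch, seq_len, filler_dim)
        roles = r_weights @ self.role_emb.weight       # (batch, seq_len, role_dim)
        # Tensor-product binding: outer product of filler and role per token.
        tpr = torch.einsum("bsf,bsr->bsfr", fillers, roles)
        return tpr.flatten(start_dim=2)                # (batch, seq_len, filler_dim * role_dim)


bert = BertModel.from_pretrained("bert-base-uncased")
head = TPRHead()
input_ids = torch.tensor([[101, 7592, 2088, 102]])     # "[CLS] hello world [SEP]"
hidden = bert(input_ids).last_hidden_state
print(head(hidden).shape)                              # torch.Size([1, 4, 512])
```

The flattened TPR per token could then feed a task-specific classifier; keeping roles and fillers as separate learned embeddings is what would let the shared structure transfer across tasks.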
Keywords: Tensor Product Representation, BERT, Transfer Learning, Neuro-Symbolic Learning
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:1910.12647/code)