Matching pre-training and Fine-tuning Methods for Knowledge Retrieval from pretrained Language Models

01 Jun 2022 (modified: 05 May 2023), ICML 2022 Workshop KRLM
Keywords: Pretrained Language Models, Commonsense Knowledge, KB Completion
Abstract: In this paper we study different methods for pre-training and fine-tuning a transformer-based language model for commonsense knowledge generation, or KB completion, in few-shot settings. The model can be trained with unsupervised and supervised methods using different pre-training objectives. We investigate the effect of each type of training objective on the model's performance in knowledge generation and retrieval, and we analyze the results in terms of plausibility, variety, and novelty. The results show that mixing both objectives in the pre-training and fine-tuning stages can yield more novel and accurate results in few-shot settings. These considerations can inform the selection and fine-tuning of a model for a specific task.
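To make the setup concrete, below is a minimal illustrative sketch of fine-tuning a pretrained language model for KB completion in a few-shot, text-to-text fashion. The abstract does not specify the implementation, so the model choice (T5), the prompt format, the toy triples, and the hyperparameters here are all assumptions for illustration only, not the authors' setup.

```python
# Illustrative sketch only: the paper does not specify its implementation.
# Assumptions: a seq2seq model (t5-small), a "head relation -> tail" text
# serialization of commonsense triples, and toy hyperparameters.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Serialize (head, relation) -> tail as text-to-text KB-completion examples.
few_shot_pairs = [
    ("complete: fork HasProperty", "made of metal"),
    ("complete: rain Causes", "wet ground"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for epoch in range(3):  # few-shot: only a handful of gradient updates
    for source, target in few_shot_pairs:
        inputs = tokenizer(source, return_tensors="pt")
        labels = tokenizer(target, return_tensors="pt").input_ids
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Query the fine-tuned model for a new tail entity.
model.eval()
query = tokenizer("complete: fire HasProperty", return_tensors="pt")
out = model.generate(**query, max_new_tokens=8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

A supervised fine-tuning objective like the one above could in principle be mixed with an unsupervised (e.g., span-corruption) pre-training objective, which is the kind of combination the abstract reports as beneficial; the exact mixing strategy is not given here.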