Enhancing Pre-trained Language Models by Self-supervised Learning for Story Cloze Test

KSEM (1) 2020
Abstract: The Story Cloze Test (SCT) has gained increasing attention as an evaluation of story comprehension: a model must select the correct ending to a story context from two candidate endings. Recent advances such as GPT and BERT have shown that fine-tuning a pre-trained transformer language model improves SCT performance. However, this framework still has fundamental problems in effectively incorporating story-level knowledge from related corpora. In this paper, we introduce three self-supervised learning tasks (Drop, Replace, and TOV) to transfer the story-level knowledge of ROCStories into the backbone model, which consists of a vanilla BERT encoder and a multi-choice head. We evaluate our approach on both the SCT-v1.0 and SCT-v1.5 benchmarks. The experimental results demonstrate that our approach achieves state-of-the-art results compared with baseline models.
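
As a rough illustration of the backbone described above, the sketch below (not the authors' code; the model name, toy story, and candidate endings are assumed for illustration) shows how a BERT encoder with a multiple-choice head can score two candidate endings for a story context, using the Hugging Face Transformers API.

```python
# Minimal sketch: scoring two candidate endings with BERT + a multi-choice head.
# The pre-trained checkpoint and the example story are illustrative assumptions.
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

context = ("Anna baked a cake for her sister's birthday. "
           "She spent hours decorating it. "
           "On the way to the party, the box slipped from her hands. "
           "The cake was ruined.")
endings = ["Anna bought a replacement cake at a nearby bakery.",
           "Anna happily served the cake to the guests."]

# Pair the same context with each candidate ending; the multi-choice head
# produces one logit per candidate and the higher logit is the prediction.
encoding = tokenizer([context] * len(endings), endings,
                     return_tensors="pt", padding=True, truncation=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # (batch=1, choices=2, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 2)
print("Predicted ending index:", logits.argmax(dim=-1).item())
```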