TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

Anonymous

08 Mar 2022 (modified: 05 May 2023) NAACL 2022 Conference Blind Submission
Paper Link: https://openreview.net/forum?id=tlVnmqE6r0
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Language Models (LMs) become outdated as the world changes; they often fail to perform tasks requiring recent factual information that was absent or different during training, a phenomenon called temporal misalignment. This is an especially challenging problem because the research community still lacks a coherent dataset for assessing the adaptability of LMs to frequently updated knowledge corpora such as Wikipedia. To this end, we introduce TemporalWiki, a lifelong benchmark for ever-evolving LMs that utilizes the differences between consecutive snapshots of Wikipedia and Wikidata for training and evaluation, respectively. The benchmark thus allows one to periodically track an LM's ability to retain previous knowledge and acquire new or updated knowledge at each point in time. We also find that training an LM on the diff data with an adapter achieves similar or better perplexity than training on the entire snapshot in our benchmark, at 12 times less computational cost, which verifies that factual knowledge in LMs can be safely updated with minimal training data via continual learning. The dataset and the code will be available at this link.
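The abstract describes building the training and evaluation data from the differences between consecutive snapshots. Below is a minimal sketch, not the authors' released pipeline, of how such a diff corpus might be derived from two snapshots represented as title-to-text dictionaries; the function name `build_diff_corpus` and the sentence-level splitting heuristic are illustrative assumptions.

```python
def build_diff_corpus(old_snapshot: dict[str, str],
                      new_snapshot: dict[str, str]) -> tuple[list[str], list[str]]:
    """Split the newer snapshot into changed/new text (for continual training)
    and unchanged text (for measuring retention of previous knowledge)."""
    changed, unchanged = [], []
    for title, new_text in new_snapshot.items():
        old_text = old_snapshot.get(title)
        if old_text is None:
            # Article absent from the earlier snapshot: entirely new knowledge.
            changed.append(new_text)
        elif old_text != new_text:
            # Keep only sentences added or edited between the two snapshots.
            old_sentences = set(old_text.split(". "))
            diff = [s for s in new_text.split(". ") if s not in old_sentences]
            changed.append(". ".join(diff))
        else:
            # Identical article text: knowledge the model should retain.
            unchanged.append(new_text)
    return changed, unchanged


if __name__ == "__main__":
    old = {"Mars": "Mars is the fourth planet. It has two moons."}
    new = {"Mars": "Mars is the fourth planet. It has two moons. A rover landed in 2021.",
           "Artemis I": "Artemis I is an uncrewed lunar mission."}
    changed, unchanged = build_diff_corpus(old, new)
    print(changed)    # text new or altered since the earlier snapshot
    print(unchanged)  # text shared by both snapshots
```

Under this framing, the "changed" split would serve as the small continual-learning corpus the abstract refers to, while the "unchanged" split supports evaluating retention of previously learned facts.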