Test-time Augmentation for Factual Probing

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Findings
Submission Type: Regular Short Paper
Submission Track: Question Answering
Submission Track 2: NLP Applications
Keywords: Factual Probing, TTA, Calibration
TL;DR: Applying test-time augmentation calibrates model confidence, but the difficulty of producing high-quality paraphrases leads to inconsistent accuracy improvements.
Abstract: Factual probing is a method that uses prompts to test if a language model "knows" certain world knowledge facts. A problem in factual probing is that small changes to the prompt can lead to large changes in model output. Previous work aimed to alleviate this problem by optimizing prompts via text mining or fine-tuning. However, such approaches are relation-specific and do not generalize to unseen relation types. Here, we propose to use test-time augmentation (TTA) as a relation-agnostic method for reducing sensitivity to prompt variations by automatically augmenting and ensembling prompts at test time. Experiments show improved model calibration, i.e., with TTA, model confidence better reflects prediction accuracy. Improvements in prediction accuracy are observed for some models, but for other models, TTA leads to degradation. Error analysis identifies the difficulty of producing high-quality prompt variations as the main challenge for TTA.
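The TTA procedure the abstract describes — augment the prompt, query the model on each variant, and ensemble the predictions — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `paraphrase_fn` and `score_fn` are hypothetical stand-ins for a paraphrase generator and the probed language model's answer scorer.

```python
from collections import defaultdict

def tta_ensemble(prompt, paraphrase_fn, score_fn, candidates):
    """Sketch of test-time augmentation for factual probing.

    paraphrase_fn: prompt -> list of prompt variants (hypothetical helper)
    score_fn: (prompt, candidate) -> model probability (hypothetical helper)
    Returns the candidate with the highest probability averaged over the
    original prompt and its variants, along with that averaged confidence.
    """
    variants = [prompt] + paraphrase_fn(prompt)
    totals = defaultdict(float)
    for variant in variants:
        for cand in candidates:
            totals[cand] += score_fn(variant, cand)
    # Averaging over variants is what yields the calibrated confidence:
    # a candidate must score well across paraphrases, not just on one prompt.
    averaged = {cand: total / len(variants) for cand, total in totals.items()}
    best = max(averaged, key=averaged.get)
    return best, averaged[best]

# Toy stand-ins for demonstration only (deterministic, not a real LM):
toy_paraphrases = lambda p: [p + " (rephrased)", p + " (alternate)"]
toy_scorer = lambda variant, cand: 0.9 if cand == "Paris" else 0.05

answer, confidence = tta_ensemble(
    "The capital of France is [MASK].",
    toy_paraphrases,
    toy_scorer,
    ["Paris", "London"],
)
```

In practice the ensembled confidence is only as good as the paraphrases: the paper's error analysis attributes TTA's inconsistent accuracy gains to low-quality prompt variations entering this average.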
Submission Number: 4268