Representation of ambiguity in pretrained models and the problem of domain specificity

Anonymous

17 Dec 2021 (modified: 05 May 2023) · ACL ARR 2021 December Blind Submission · Readers: Everyone
Abstract: Recent developments in pretrained language models have led to many advances in NLP. These models excel at learning powerful contextual representations from very large corpora. Fine-tuning them for downstream tasks has been one of the most widely used (and successful) approaches to solving a wide range of NLP problems. But how capable are these models of capturing subtle linguistic traits like ambiguity in their representations? We present results from a probing task designed to test the ability of such models to identify ambiguous sentences under different experimental settings. The results show how different pretrained models fare against each other on the same task. We also explore how domain specificity limits the representational capabilities of the probes.
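For readers unfamiliar with the setup the abstract describes, the sketch below shows one common form of such a probe: a lightweight linear classifier trained on frozen sentence representations from a pretrained model to predict whether a sentence is ambiguous. This is a minimal illustration, not the paper's actual method; the choice of bert-base-uncased, the [CLS] pooling strategy, and the two-sentence toy dataset with its ambiguity labels are all assumptions made for the example.

```python
# Minimal probing sketch: train a linear classifier on frozen
# pretrained-model representations to predict sentence ambiguity.
# Model choice, pooling strategy, and the toy data are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()  # the encoder stays frozen; only the probe is trained

def embed(sentences):
    """Return one frozen [CLS] vector per sentence."""
    with torch.no_grad():
        enc = tokenizer(sentences, padding=True, truncation=True,
                        return_tensors="pt")
        out = model(**enc)
        return out.last_hidden_state[:, 0, :].numpy()  # [CLS] token

# Hypothetical labeled data: 1 = ambiguous, 0 = unambiguous.
train_sents = ["I saw the man with the telescope.",
               "The cat sat on the mat."]
train_labels = [1, 0]

# The probe itself: a simple logistic-regression classifier.
probe = LogisticRegression(max_iter=1000)
probe.fit(embed(train_sents), train_labels)

test_sents = ["Visiting relatives can be boring."]
print(probe.predict(embed(test_sents)))  # probe's ambiguity prediction
```

Keeping the probe deliberately simple is standard practice in this kind of study: if even a linear classifier can separate ambiguous from unambiguous sentences, the distinction is plausibly encoded in the frozen representations rather than learned by the probe itself.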
Paper Type: short
Consent To Share Data: yes