How does BERT address polysemy of Korean adverbial postpositions -ey, -eyse, and -(u)lo?

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: polysemy, natural language processing, classification, language model, BERT, data visualization, Korean
Abstract: The present study reports computational accounts of resolving word-level polysemy in a lesser-studied language, Korean. Postpositions, which are characterized by multiple form-function mappings and are thus polysemous, pose a challenge to automatic analysis and to model performance in identifying their functions. In this study, we devise a classification model employing BERT and introduce a computational simulation that interactively demonstrates how a BERT model simulates human interpretation of word-level polysemy involving the Korean adverbial postpositions -ey, -eyse, and -(u)lo. Results reveal that (i) classification accuracy is inversely related to the number of functions each postposition manifests, (ii) model performance is affected by the corpus size of each function, and (iii) performance gradually improves as training epochs proceed.
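The evaluation underlying findings (i) and (ii) amounts to grouping labeled test sentences by postposition and comparing classification accuracy against the number of functions each postposition carries. A minimal, self-contained sketch of that bookkeeping step, assuming sentence-level gold and predicted function labels (the triples and function tags below are invented placeholders, not the paper's corpus or label set):

```python
from collections import defaultdict

# Hypothetical examples: (postposition, gold function label, predicted label).
# The real study draws labeled sentences from a Korean corpus; these
# placeholder triples are for illustration only.
predictions = [
    ("-ey",    "LOC", "LOC"),
    ("-ey",    "GOL", "LOC"),
    ("-ey",    "CRT", "CRT"),
    ("-eyse",  "LOC", "LOC"),
    ("-eyse",  "SRC", "SRC"),
    ("-(u)lo", "DIR", "DIR"),
    ("-(u)lo", "INS", "DIR"),
    ("-(u)lo", "FNS", "FNS"),
]

def accuracy_by_postposition(triples):
    """Return {postposition: (accuracy, number of distinct gold functions)}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    functions = defaultdict(set)
    for post, gold, pred in triples:
        total[post] += 1
        correct[post] += int(gold == pred)
        functions[post].add(gold)
    return {p: (correct[p] / total[p], len(functions[p])) for p in total}

for post, (acc, n_fn) in accuracy_by_postposition(predictions).items():
    print(f"{post}: accuracy={acc:.2f} over {n_fn} functions")
```

Comparing the per-postposition accuracies against the function counts in the returned dictionary is what makes the inverse relationship in finding (i) visible.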
One-sentence Summary: This study reports computational accounts of resolving word-level polysemy of Korean adverbial postpositions by employing BERT and a visualization system.
Supplementary Material: zip